Test Report: Docker_Linux_containerd 12739

7c21f9163ae8b175cef980961032eb5d83504bec:2021-12-31:22031

Failed tests (19/266)

TestNetworkPlugins/group/calico/Start (548.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (9m8.706915602s)

-- stdout --
	* [calico-20211231101408-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20211231101408-6736 in cluster calico-20211231101408-6736
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1231 10:20:37.324893  184020 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:20:37.325019  184020 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:20:37.325032  184020 out.go:310] Setting ErrFile to fd 2...
	I1231 10:20:37.325040  184020 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:20:37.325195  184020 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:20:37.325556  184020 out.go:304] Setting JSON to false
	I1231 10:20:37.327433  184020 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3792,"bootTime":1640942245,"procs":706,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:20:37.327532  184020 start.go:122] virtualization: kvm guest
	I1231 10:20:37.332055  184020 out.go:176] * [calico-20211231101408-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:20:37.332311  184020 notify.go:174] Checking for updates...
	I1231 10:20:37.334837  184020 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:20:37.337409  184020 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:20:37.340145  184020 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:20:37.342690  184020 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:20:37.344946  184020 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:20:37.345819  184020 config.go:176] Loaded profile config "cilium-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:37.345977  184020 config.go:176] Loaded profile config "custom-weave-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:37.346140  184020 config.go:176] Loaded profile config "running-upgrade-20211231101758-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1231 10:20:37.346223  184020 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:20:37.405802  184020 docker.go:132] docker version: linux-20.10.12
	I1231 10:20:37.405955  184020 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:20:37.556816  184020 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:20:37.450031072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:20:37.556946  184020 docker.go:237] overlay module found
	I1231 10:20:37.560041  184020 out.go:176] * Using the docker driver based on user configuration
	I1231 10:20:37.560086  184020 start.go:280] selected driver: docker
	I1231 10:20:37.560094  184020 start.go:795] validating driver "docker" against <nil>
	I1231 10:20:37.560114  184020 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:20:37.560132  184020 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:20:37.560138  184020 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:20:37.560169  184020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:20:37.560188  184020 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:20:37.562177  184020 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:20:37.562951  184020 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:20:37.689579  184020 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:20:37.611298746 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:20:37.689737  184020 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:20:37.689979  184020 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:20:37.690001  184020 cni.go:93] Creating CNI manager for "calico"
	I1231 10:20:37.690009  184020 start_flags.go:293] Found "Calico" CNI - setting NetworkPlugin=cni
	I1231 10:20:37.690029  184020 start_flags.go:298] config:
	{Name:calico-20211231101408-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:calico-20211231101408-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:20:37.693095  184020 out.go:176] * Starting control plane node calico-20211231101408-6736 in cluster calico-20211231101408-6736
	I1231 10:20:37.693168  184020 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:20:37.695434  184020 out.go:176] * Pulling base image ...
	I1231 10:20:37.695491  184020 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:20:37.695529  184020 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:20:37.695552  184020 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:20:37.695570  184020 cache.go:57] Caching tarball of preloaded images
	I1231 10:20:37.695869  184020 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:20:37.695904  184020 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:20:37.696066  184020 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/config.json ...
	I1231 10:20:37.696094  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/config.json: {Name:mk92e3eb42ee64a1640886650672239949db69fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:20:37.742854  184020 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:20:37.742887  184020 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:20:37.742903  184020 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:20:37.742943  184020 start.go:313] acquiring machines lock for calico-20211231101408-6736: {Name:mkeb730f94b84ccf9a06995c883045613561e5bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:20:37.743200  184020 start.go:317] acquired machines lock for "calico-20211231101408-6736" in 208.569µs
	I1231 10:20:37.743257  184020 start.go:89] Provisioning new machine with config: &{Name:calico-20211231101408-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:calico-20211231101408-6736 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:20:37.743398  184020 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:20:37.747054  184020 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1231 10:20:37.747393  184020 start.go:160] libmachine.API.Create for "calico-20211231101408-6736" (driver="docker")
	I1231 10:20:37.747455  184020 client.go:168] LocalClient.Create starting
	I1231 10:20:37.747536  184020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:20:37.747581  184020 main.go:130] libmachine: Decoding PEM data...
	I1231 10:20:37.747603  184020 main.go:130] libmachine: Parsing certificate...
	I1231 10:20:37.747675  184020 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:20:37.747699  184020 main.go:130] libmachine: Decoding PEM data...
	I1231 10:20:37.747717  184020 main.go:130] libmachine: Parsing certificate...
	I1231 10:20:37.748123  184020 cli_runner.go:133] Run: docker network inspect calico-20211231101408-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:20:37.791425  184020 cli_runner.go:180] docker network inspect calico-20211231101408-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:20:37.791502  184020 network_create.go:254] running [docker network inspect calico-20211231101408-6736] to gather additional debugging logs...
	I1231 10:20:37.791532  184020 cli_runner.go:133] Run: docker network inspect calico-20211231101408-6736
	W1231 10:20:37.833180  184020 cli_runner.go:180] docker network inspect calico-20211231101408-6736 returned with exit code 1
	I1231 10:20:37.833232  184020 network_create.go:257] error running [docker network inspect calico-20211231101408-6736]: docker network inspect calico-20211231101408-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211231101408-6736
	I1231 10:20:37.833262  184020 network_create.go:259] output of [docker network inspect calico-20211231101408-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211231101408-6736
	
	** /stderr **
	I1231 10:20:37.833309  184020 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:20:37.876797  184020 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-dae06f601d93 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:46:37:ee:90}}
	I1231 10:20:37.877732  184020 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0000104d0] misses:0}
	I1231 10:20:37.877784  184020 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:20:37.877803  184020 network_create.go:106] attempt to create docker network calico-20211231101408-6736 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1231 10:20:37.877856  184020 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211231101408-6736
	I1231 10:20:37.991990  184020 network_create.go:90] docker network calico-20211231101408-6736 192.168.58.0/24 created
	I1231 10:20:37.992048  184020 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20211231101408-6736" container
	I1231 10:20:37.992123  184020 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:20:38.047495  184020 cli_runner.go:133] Run: docker volume create calico-20211231101408-6736 --label name.minikube.sigs.k8s.io=calico-20211231101408-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:20:38.088317  184020 oci.go:102] Successfully created a docker volume calico-20211231101408-6736
	I1231 10:20:38.088397  184020 cli_runner.go:133] Run: docker run --rm --name calico-20211231101408-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211231101408-6736 --entrypoint /usr/bin/test -v calico-20211231101408-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:20:39.168782  184020 cli_runner.go:186] Completed: docker run --rm --name calico-20211231101408-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211231101408-6736 --entrypoint /usr/bin/test -v calico-20211231101408-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (1.080341269s)
	I1231 10:20:39.168820  184020 oci.go:106] Successfully prepared a docker volume calico-20211231101408-6736
	I1231 10:20:39.168878  184020 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:20:39.168900  184020 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:20:39.168972  184020 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211231101408-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I1231 10:20:55.392947  184020 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211231101408-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (16.223912731s)
	I1231 10:20:55.392990  184020 kic.go:188] duration metric: took 16.224086 seconds to extract preloaded images to volume
	W1231 10:20:55.393052  184020 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:20:55.393072  184020 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:20:55.393141  184020 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:20:55.571947  184020 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20211231101408-6736 --name calico-20211231101408-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211231101408-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20211231101408-6736 --network calico-20211231101408-6736 --ip 192.168.58.2 --volume calico-20211231101408-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I1231 10:20:56.308703  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Running}}
	I1231 10:20:56.365855  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:20:56.439353  184020 cli_runner.go:133] Run: docker exec calico-20211231101408-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:20:56.565380  184020 oci.go:175] the created container "calico-20211231101408-6736" has a running status.
	I1231 10:20:56.565436  184020 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa...
	I1231 10:20:56.700127  184020 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:20:56.924521  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:20:57.007718  184020 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:20:57.007745  184020 kic_runner.go:114] Args: [docker exec --privileged calico-20211231101408-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:20:57.172990  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:20:57.255571  184020 machine.go:88] provisioning docker machine ...
	I1231 10:20:57.255774  184020 ubuntu.go:169] provisioning hostname "calico-20211231101408-6736"
	I1231 10:20:57.255991  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:57.328415  184020 main.go:130] libmachine: Using SSH client type: native
	I1231 10:20:57.328630  184020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49372 <nil> <nil>}
	I1231 10:20:57.328649  184020 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20211231101408-6736 && echo "calico-20211231101408-6736" | sudo tee /etc/hostname
	I1231 10:20:57.746471  184020 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20211231101408-6736
	
	I1231 10:20:57.746551  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:57.810729  184020 main.go:130] libmachine: Using SSH client type: native
	I1231 10:20:57.811210  184020 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49372 <nil> <nil>}
	I1231 10:20:57.811247  184020 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20211231101408-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20211231101408-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20211231101408-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:20:57.969492  184020 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:20:57.969530  184020 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:20:57.969581  184020 ubuntu.go:177] setting up certificates
	I1231 10:20:57.969593  184020 provision.go:83] configureAuth start
	I1231 10:20:57.969656  184020 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211231101408-6736
	I1231 10:20:58.027151  184020 provision.go:138] copyHostCerts
	I1231 10:20:58.027224  184020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:20:58.027234  184020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:20:58.027294  184020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:20:58.027412  184020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:20:58.027428  184020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:20:58.027451  184020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:20:58.027520  184020 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:20:58.027526  184020 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:20:58.027547  184020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:20:58.027623  184020 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.calico-20211231101408-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20211231101408-6736]
	I1231 10:20:58.127885  184020 provision.go:172] copyRemoteCerts
	I1231 10:20:58.127965  184020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:20:58.128004  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:58.175234  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:20:58.281726  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I1231 10:20:58.383526  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1231 10:20:58.413332  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:20:58.444142  184020 provision.go:86] duration metric: configureAuth took 474.520604ms
	I1231 10:20:58.444182  184020 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:20:58.444412  184020 config.go:176] Loaded profile config "calico-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:58.444430  184020 machine.go:91] provisioned docker machine in 1.188821599s
	I1231 10:20:58.444445  184020 client.go:171] LocalClient.Create took 20.696983265s
	I1231 10:20:58.444457  184020 start.go:168] duration metric: libmachine.API.Create for "calico-20211231101408-6736" took 20.697065687s
	I1231 10:20:58.444466  184020 start.go:267] post-start starting for "calico-20211231101408-6736" (driver="docker")
	I1231 10:20:58.444471  184020 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:20:58.444530  184020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:20:58.444579  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:58.503527  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:20:58.610685  184020 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:20:58.615202  184020 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:20:58.615267  184020 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:20:58.615285  184020 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:20:58.615294  184020 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:20:58.615310  184020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:20:58.615497  184020 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:20:58.615693  184020 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:20:58.615881  184020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:20:58.628881  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:20:58.652604  184020 start.go:270] post-start completed in 208.122374ms
	I1231 10:20:58.653092  184020 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211231101408-6736
	I1231 10:20:58.706294  184020 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/config.json ...
	I1231 10:20:58.706638  184020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:20:58.706694  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:58.763255  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:20:58.869471  184020 start.go:129] duration metric: createHost completed in 21.126057291s
	I1231 10:20:58.869515  184020 start.go:80] releasing machines lock for "calico-20211231101408-6736", held for 21.126282924s
	I1231 10:20:58.869622  184020 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211231101408-6736
	I1231 10:20:58.925646  184020 ssh_runner.go:195] Run: systemctl --version
	I1231 10:20:58.925720  184020 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:20:58.925722  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:58.925800  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:20:58.982780  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:20:58.984847  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:20:59.104792  184020 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:20:59.119852  184020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:20:59.134182  184020 docker.go:158] disabling docker service ...
	I1231 10:20:59.134263  184020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:20:59.159605  184020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:20:59.171744  184020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:20:59.297670  184020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:20:59.410258  184020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:20:59.422509  184020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:20:59.443509  184020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLnNlcnZpY2UudjEuZGlmZi1zZXJ2aWNlIl0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdjLnYxLnNjaGVkdWxlciJdCiAgICBwYXVzZV90aHJlc2hvbGQgPSAwLjAyCiAgICBkZWxldGlvbl90aHJlc2hvbGQgPSAwCiAgICBtdXRhdGlvbl90aHJlc2hvbGQgPSAxMDAKICAgIHNjaGVkdWxlX2R
lbGF5ID0gIjBzIgogICAgc3RhcnR1cF9kZWxheSA9ICIxMDBtcyIK" | base64 -d | sudo tee /etc/containerd/config.toml"
	I1231 10:20:59.463470  184020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:20:59.472585  184020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:20:59.481457  184020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:20:59.593763  184020 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:20:59.713124  184020 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:20:59.713195  184020 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:20:59.723748  184020 start.go:458] Will wait 60s for crictl version
	I1231 10:20:59.723860  184020 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:20:59.798288  184020 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:20:59Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:21:10.848391  184020 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:21:10.885572  184020 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:21:10.885645  184020 ssh_runner.go:195] Run: containerd --version
	I1231 10:21:10.911762  184020 ssh_runner.go:195] Run: containerd --version
	I1231 10:21:10.944610  184020 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:21:10.944733  184020 cli_runner.go:133] Run: docker network inspect calico-20211231101408-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:21:10.994097  184020 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:21:10.999534  184020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:21:11.016929  184020 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:21:11.019401  184020 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:21:11.019493  184020 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:21:11.019564  184020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:21:11.059340  184020 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:21:11.059367  184020 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:21:11.059414  184020 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:21:11.113002  184020 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:21:11.113032  184020 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:21:11.113090  184020 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:21:11.151390  184020 cni.go:93] Creating CNI manager for "calico"
	I1231 10:21:11.151433  184020 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:21:11.151451  184020 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20211231101408-6736 NodeName:calico-20211231101408-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var
/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:21:11.151641  184020 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20211231101408-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:21:11.151842  184020 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=calico-20211231101408-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:calico-20211231101408-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1231 10:21:11.151951  184020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:21:11.164152  184020 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:21:11.164336  184020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:21:11.177447  184020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (602 bytes)
	I1231 10:21:11.199631  184020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:21:11.225764  184020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I1231 10:21:11.250017  184020 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:21:11.255143  184020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:21:11.275455  184020 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736 for IP: 192.168.58.2
	I1231 10:21:11.275584  184020 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:21:11.275643  184020 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:21:11.275713  184020 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.key
	I1231 10:21:11.275726  184020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.crt with IP's: []
	I1231 10:21:11.569085  184020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.crt ...
	I1231 10:21:11.569191  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.crt: {Name:mka88acccdf0f2337b3523dcab34fc3ac7c78cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.569599  184020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.key ...
	I1231 10:21:11.569621  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/client.key: {Name:mk71583c57e58e59e36cfd9d7f9774c04a468e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.569760  184020 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key.cee25041
	I1231 10:21:11.569792  184020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:21:11.745081  184020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt.cee25041 ...
	I1231 10:21:11.745136  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt.cee25041: {Name:mkb5bf015dffd8c17b4f4a6986b66b6630898f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.745347  184020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key.cee25041 ...
	I1231 10:21:11.745364  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key.cee25041: {Name:mkcd2adb76566473e9a828da9fad1638a693ab5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.745462  184020 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt
	I1231 10:21:11.745529  184020 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key
	I1231 10:21:11.745594  184020 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.key
	I1231 10:21:11.745612  184020 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.crt with IP's: []
	I1231 10:21:11.948202  184020 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.crt ...
	I1231 10:21:11.948273  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.crt: {Name:mk17d0d667f6434f6c2785485c54605afeb74486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.949311  184020 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.key ...
	I1231 10:21:11.949345  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.key: {Name:mk40f6c2db7295ceb1a3a4c3a38708d094e4b5f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:11.949565  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:21:11.949601  184020 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:21:11.949608  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:21:11.949630  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:21:11.949652  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:21:11.949670  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:21:11.949710  184020 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:21:11.950746  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:21:11.972950  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:21:11.999949  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:21:12.026292  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/calico-20211231101408-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:21:12.056323  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:21:12.083098  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:21:12.112663  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:21:12.145013  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:21:12.173604  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:21:12.210772  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:21:12.234521  184020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:21:12.257060  184020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:21:12.273642  184020 ssh_runner.go:195] Run: openssl version
	I1231 10:21:12.280649  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:21:12.290691  184020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:21:12.294910  184020 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:21:12.294971  184020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:21:12.303964  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:21:12.312821  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:21:12.321640  184020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:12.325487  184020 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:12.325548  184020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:12.331308  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:21:12.344428  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:21:12.356179  184020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:21:12.360443  184020 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:21:12.360606  184020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:21:12.366856  184020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:21:12.377277  184020 kubeadm.go:388] StartCluster: {Name:calico-20211231101408-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:calico-20211231101408-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:21:12.377393  184020 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:21:12.377447  184020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:21:12.411300  184020 cri.go:87] found id: ""
	I1231 10:21:12.411376  184020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:21:12.424334  184020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:21:12.433741  184020 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:21:12.433802  184020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:21:12.448593  184020 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
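
The exit-status-2 ls above is the expected result on a fresh node: none of the four kubeconfig files exist yet, so minikube skips stale-config cleanup and goes straight to kubeadm init. A rough local stand-in for that decision (the real check runs the command through ssh_runner inside the node container):

	// freshnode.go - rough local stand-in for the stale-config check;
	// the real version runs the command over SSH inside the node container.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		cmd := exec.Command("ls", append([]string{"-la"}, files...)...)
		if err := cmd.Run(); err != nil {
			// Non-zero exit: configs are absent, treat the node as fresh
			// and skip stale-config cleanup.
			fmt.Println("config check failed, skipping stale config cleanup:", err)
			return
		}
		fmt.Println("existing configs found; clean up before kubeadm init")
	}
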
	I1231 10:21:12.448653  184020 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:21:12.819595  184020 out.go:203]   - Generating certificates and keys ...
	I1231 10:21:15.601429  184020 out.go:203]   - Booting up control plane ...
	I1231 10:21:31.164813  184020 out.go:203]   - Configuring RBAC rules ...
	I1231 10:21:31.589573  184020 cni.go:93] Creating CNI manager for "calico"
	I1231 10:21:31.592188  184020 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I1231 10:21:31.592602  184020 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:21:31.592643  184020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I1231 10:21:31.613585  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:21:33.413390  184020 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.799760319s)
	I1231 10:21:33.413461  184020 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:21:33.413578  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:33.413612  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=calico-20211231101408-6736 minikube.k8s.io/updated_at=2021_12_31T10_21_33_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:33.579949  184020 ops.go:34] apiserver oom_adj: -16
	I1231 10:21:33.580059  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:34.147112  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:34.646520  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:35.147344  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:35.646697  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:36.146819  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:36.646478  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:37.146904  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:37.646544  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:38.147051  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:38.646778  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:39.146725  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:39.647532  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:40.146951  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:40.646568  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:41.146562  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:41.647341  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:42.147591  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:42.647298  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:43.146670  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:43.646808  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:44.146740  184020 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:44.619039  184020 kubeadm.go:864] duration metric: took 11.205530207s to wait for elevateKubeSystemPrivileges.
	I1231 10:21:44.619081  184020 kubeadm.go:390] StartCluster complete in 32.241813488s
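
The run of kubectl get sa default calls above (one every ~500ms from 10:21:33 to 10:21:44) is minikube waiting for the default service account to exist before it can bind cluster-admin to kube-system:default; the account only appears once the controller-manager's service-account controller is up. In sketch form (kubectl invoked directly here; minikube shells out through ssh_runner with the versioned binary path and an explicit kubeconfig, as in the log):

	// waitsa.go - illustrative polling loop for the default service account;
	// paths and kubeconfig handling are simplified relative to minikube.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--namespace", "default")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
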
	I1231 10:21:44.619104  184020 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:44.619214  184020 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:21:44.621864  184020 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1231 10:21:44.689364  184020 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I1231 10:21:45.692807  184020 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20211231101408-6736" rescaled to 1
	I1231 10:21:45.692893  184020 start.go:206] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:21:45.693195  184020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:21:45.693215  184020 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:21:45.693452  184020 config.go:176] Loaded profile config "calico-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:21:45.696422  184020 out.go:176] * Verifying Kubernetes components...
	I1231 10:21:45.696823  184020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:21:45.696865  184020 addons.go:65] Setting default-storageclass=true in profile "calico-20211231101408-6736"
	I1231 10:21:45.696911  184020 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20211231101408-6736"
	I1231 10:21:45.697042  184020 addons.go:65] Setting storage-provisioner=true in profile "calico-20211231101408-6736"
	I1231 10:21:45.697103  184020 addons.go:153] Setting addon storage-provisioner=true in "calico-20211231101408-6736"
	W1231 10:21:45.697119  184020 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:21:45.697189  184020 host.go:66] Checking if "calico-20211231101408-6736" exists ...
	I1231 10:21:45.697542  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:21:45.697691  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:21:45.769205  184020 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:21:45.769415  184020 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:21:45.769432  184020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:21:45.769489  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:21:45.778078  184020 addons.go:153] Setting addon default-storageclass=true in "calico-20211231101408-6736"
	W1231 10:21:45.778120  184020 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:21:45.778152  184020 host.go:66] Checking if "calico-20211231101408-6736" exists ...
	I1231 10:21:45.778661  184020 cli_runner.go:133] Run: docker container inspect calico-20211231101408-6736 --format={{.State.Status}}
	I1231 10:21:45.844890  184020 node_ready.go:35] waiting up to 5m0s for node "calico-20211231101408-6736" to be "Ready" ...
	I1231 10:21:45.845260  184020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:21:45.849753  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:21:45.852506  184020 node_ready.go:49] node "calico-20211231101408-6736" has status "Ready":"True"
	I1231 10:21:45.852538  184020 node_ready.go:38] duration metric: took 7.60597ms waiting for node "calico-20211231101408-6736" to be "Ready" ...
	I1231 10:21:45.852551  184020 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1231 10:21:45.864029  184020 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:21:45.864062  184020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:21:45.864112  184020 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211231101408-6736
	I1231 10:21:45.872326  184020 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-7t8mt" in "kube-system" namespace to be "Ready" ...
	I1231 10:21:45.943762  184020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49372 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/calico-20211231101408-6736/id_rsa Username:docker}
	I1231 10:21:46.018796  184020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:21:46.201997  184020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:21:47.982754  184020 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7t8mt" in "kube-system" namespace has status "Ready":"False"
	I1231 10:21:48.195154  184020 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.349857104s)
	I1231 10:21:48.195192  184020 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1231 10:21:48.485020  184020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.46617948s)
	I1231 10:21:48.485163  184020 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.283121366s)
	I1231 10:21:48.487972  184020 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:21:48.488050  184020 addons.go:417] enableAddons completed in 2.794841711s
	I1231 10:21:50.409296  184020 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7t8mt" in "kube-system" namespace has status "Ready":"False"
	[... same pod_ready.go:102 line repeated every 2-3 seconds from 10:21:52 through 10:25:42; pod "calico-kube-controllers-8594699699-7t8mt" stayed "Ready":"False" throughout ...]
	I1231 10:25:44.911432  184020 pod_ready.go:102] pod "calico-kube-controllers-8594699699-7t8mt" in "kube-system" namespace has status "Ready":"False"
	I1231 10:25:45.911522  184020 pod_ready.go:81] duration metric: took 4m0.039154137s waiting for pod "calico-kube-controllers-8594699699-7t8mt" in "kube-system" namespace to be "Ready" ...
	E1231 10:25:45.911550  184020 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
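
This WaitExtra timeout is the test's root failure: calico-kube-controllers never reported Ready across roughly four minutes of polling, and the next wait below shows the calico-node DaemonSet pod stuck in the same state, so the CNI itself never became healthy. The readiness test reduces to reading the pod's PodReady condition; a minimal client-go sketch of that kind of check (not minikube's actual code; clientset construction is left to the caller):

	// Package podwait - a minimal client-go sketch of the kind of check
	// pod_ready.go performs; this is not minikube's implementation.
	package podwait
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// WaitPodReady polls the pod's PodReady condition until it is True or
	// the timeout expires, mirroring the 5m "extra" wait in the log above.
	func WaitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
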
	I1231 10:25:45.911560  184020 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-256pg" in "kube-system" namespace to be "Ready" ...
	I1231 10:25:47.926489  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	[... same pod_ready.go:102 line repeated every 2-3 seconds from 10:25:49 through 10:29:32; pod "calico-node-256pg" stayed "Ready":"False" throughout ...]
	I1231 10:29:34.928801  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:36.930840  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:39.425312  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:41.924552  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:43.926034  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:45.932770  184020 pod_ready.go:102] pod "calico-node-256pg" in "kube-system" namespace has status "Ready":"False"
	I1231 10:29:45.932817  184020 pod_ready.go:81] duration metric: took 4m0.021251819s waiting for pod "calico-node-256pg" in "kube-system" namespace to be "Ready" ...
	E1231 10:29:45.932825  184020 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1231 10:29:45.932840  184020 pod_ready.go:38] duration metric: took 8m0.080275345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1231 10:29:45.935690  184020 out.go:176] 
	W1231 10:29:45.935904  184020 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1231 10:29:45.935922  184020 out.go:241] * 
	W1231 10:29:45.936773  184020 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:29:45.939135  184020 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (548.73s)
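
What the trace above amounts to: pod_ready.go:102 polls the "calico-node-256pg" pod's Ready condition roughly every 2-2.5 seconds, gives up on that pod once its 4m0s budget expires (pod_ready.go:81), the outer 8m0s "extra" wait then times out, and start surfaces this as GUEST_START / exit status 80. Below is a minimal Go sketch of that polling pattern using client-go; the namespace and pod name are taken from the log, while the kubeconfig path, cadence, and error text are illustrative assumptions, not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True; this is
// the condition the "Ready":"False" lines above are printing.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the pod is Ready or the budget expires,
// mirroring the ~2.5s cadence and 4m0s per-pod budget seen in the log.
func waitPodReady(cs kubernetes.Interface, ns, name string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pod %q in %q to be Ready", name, ns)
}

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config location; the
	// harness above points KUBECONFIG at the integration workspace instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "calico-node-256pg", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

A calico-node pod that never reaches Ready within the budget typically means its readiness probe (which checks bird and felix) keeps failing; the `minikube logs --file=logs.txt` output requested in the box above is what would distinguish an image-pull problem from a crash loop.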

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (298.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20211231101407-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-20211231101407-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: exit status 80 (4m58.458538511s)

                                                
                                                
-- stdout --
	* [kindnet-20211231101407-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node kindnet-20211231101407-6736 in cluster kindnet-20211231101407-6736
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
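
The kindnet run hits the same 5m wait, but the stderr trace that follows is a useful map of the start sequence: driver validation, docker network and volume creation, preload extraction into the volume, SSH certificate provisioning, writing /etc/crictl.yaml and a base64-encoded /etc/containerd/config.toml, restarting containerd, and then probing it. Note the retry.go:31 line in the trace: the first `sudo crictl version` fails with "server is not initialized yet" and is simply re-run after a delay. Below is a minimal Go sketch of that retry-until-deadline pattern; the command, initial delay, backoff, and budget here are illustrative assumptions, not minikube's retry package.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryCommand re-runs a command until it succeeds or the budget is spent,
// the pattern behind "Will wait 60s for crictl version" in the trace below.
func retryCommand(budget time.Duration, name string, args ...string) ([]byte, error) {
	deadline := time.Now().Add(budget)
	delay := 2 * time.Second
	for {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err == nil {
			return out, nil
		}
		if time.Now().After(deadline) {
			return out, fmt.Errorf("%s %v failed after retries: %w", name, args, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // illustrative exponential backoff
	}
}

func main() {
	// containerd's CRI server needs a moment after "systemctl restart containerd".
	out, err := retryCommand(time.Minute, "sudo", "crictl", "version")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s", out)
}

Retrying is the right call here because "server is not initialized yet" is a transient condition of a freshly restarted daemon, not a configuration error.
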
** stderr ** 
	I1231 10:20:58.773220  187480 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:20:58.773339  187480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:20:58.773347  187480 out.go:310] Setting ErrFile to fd 2...
	I1231 10:20:58.773353  187480 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:20:58.773497  187480 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:20:58.773950  187480 out.go:304] Setting JSON to false
	I1231 10:20:58.776155  187480 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3813,"bootTime":1640942245,"procs":771,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:20:58.776289  187480 start.go:122] virtualization: kvm guest
	I1231 10:20:58.781933  187480 out.go:176] * [kindnet-20211231101407-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:20:58.785801  187480 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:20:58.782219  187480 notify.go:174] Checking for updates...
	I1231 10:20:58.788942  187480 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:20:58.791757  187480 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:20:58.794638  187480 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:20:58.797754  187480 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:20:58.798431  187480 config.go:176] Loaded profile config "calico-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:58.798556  187480 config.go:176] Loaded profile config "cilium-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:58.798669  187480 config.go:176] Loaded profile config "enable-default-cni-20211231101406-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:20:58.798739  187480 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:20:58.849369  187480 docker.go:132] docker version: linux-20.10.12
	I1231 10:20:58.849501  187480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:20:59.005852  187480 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:20:58.895847116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:20:59.005975  187480 docker.go:237] overlay module found
	I1231 10:20:59.009106  187480 out.go:176] * Using the docker driver based on user configuration
	I1231 10:20:59.009152  187480 start.go:280] selected driver: docker
	I1231 10:20:59.009160  187480 start.go:795] validating driver "docker" against <nil>
	I1231 10:20:59.009188  187480 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:20:59.009230  187480 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:20:59.009238  187480 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:20:59.009275  187480 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:20:59.009312  187480 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:20:59.012433  187480 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:20:59.013577  187480 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:20:59.155410  187480 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:20:59.060550063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:20:59.155565  187480 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:20:59.155797  187480 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:20:59.155834  187480 cni.go:93] Creating CNI manager for "kindnet"
	I1231 10:20:59.155848  187480 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:20:59.155854  187480 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:20:59.155863  187480 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 10:20:59.155876  187480 start_flags.go:298] config:
	{Name:kindnet-20211231101407-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:kindnet-20211231101407-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:20:59.158965  187480 out.go:176] * Starting control plane node kindnet-20211231101407-6736 in cluster kindnet-20211231101407-6736
	I1231 10:20:59.159041  187480 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:20:59.161269  187480 out.go:176] * Pulling base image ...
	I1231 10:20:59.161313  187480 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:20:59.161343  187480 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:20:59.161350  187480 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:20:59.161550  187480 cache.go:57] Caching tarball of preloaded images
	I1231 10:20:59.161835  187480 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:20:59.161866  187480 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:20:59.162022  187480 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/config.json ...
	I1231 10:20:59.162072  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/config.json: {Name:mk104cb3c5b8e604eebedcd705efe8a432304499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:20:59.230081  187480 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:20:59.230119  187480 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:20:59.230158  187480 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:20:59.230200  187480 start.go:313] acquiring machines lock for kindnet-20211231101407-6736: {Name:mk1d10d5e12143403f7933c6ee150a18df6f8dd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:20:59.230367  187480 start.go:317] acquired machines lock for "kindnet-20211231101407-6736" in 139.298µs
	I1231 10:20:59.230410  187480 start.go:89] Provisioning new machine with config: &{Name:kindnet-20211231101407-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:kindnet-20211231101407-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:20:59.230526  187480 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:20:59.235064  187480 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1231 10:20:59.235364  187480 start.go:160] libmachine.API.Create for "kindnet-20211231101407-6736" (driver="docker")
	I1231 10:20:59.235404  187480 client.go:168] LocalClient.Create starting
	I1231 10:20:59.235489  187480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:20:59.235532  187480 main.go:130] libmachine: Decoding PEM data...
	I1231 10:20:59.235557  187480 main.go:130] libmachine: Parsing certificate...
	I1231 10:20:59.235642  187480 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:20:59.235668  187480 main.go:130] libmachine: Decoding PEM data...
	I1231 10:20:59.235683  187480 main.go:130] libmachine: Parsing certificate...
	I1231 10:20:59.236129  187480 cli_runner.go:133] Run: docker network inspect kindnet-20211231101407-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:20:59.285889  187480 cli_runner.go:180] docker network inspect kindnet-20211231101407-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:20:59.285974  187480 network_create.go:254] running [docker network inspect kindnet-20211231101407-6736] to gather additional debugging logs...
	I1231 10:20:59.286010  187480 cli_runner.go:133] Run: docker network inspect kindnet-20211231101407-6736
	W1231 10:20:59.335565  187480 cli_runner.go:180] docker network inspect kindnet-20211231101407-6736 returned with exit code 1
	I1231 10:20:59.335611  187480 network_create.go:257] error running [docker network inspect kindnet-20211231101407-6736]: docker network inspect kindnet-20211231101407-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211231101407-6736
	I1231 10:20:59.335627  187480 network_create.go:259] output of [docker network inspect kindnet-20211231101407-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211231101407-6736
	
	** /stderr **
	I1231 10:20:59.335693  187480 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:20:59.385072  187480 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001aa540] misses:0}
	I1231 10:20:59.385129  187480 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:20:59.385148  187480 network_create.go:106] attempt to create docker network kindnet-20211231101407-6736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1231 10:20:59.385211  187480 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211231101407-6736
	I1231 10:20:59.483863  187480 network_create.go:90] docker network kindnet-20211231101407-6736 192.168.49.0/24 created
	I1231 10:20:59.483912  187480 kic.go:106] calculated static IP "192.168.49.2" for the "kindnet-20211231101407-6736" container
	I1231 10:20:59.483988  187480 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:20:59.531728  187480 cli_runner.go:133] Run: docker volume create kindnet-20211231101407-6736 --label name.minikube.sigs.k8s.io=kindnet-20211231101407-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:20:59.586525  187480 oci.go:102] Successfully created a docker volume kindnet-20211231101407-6736
	I1231 10:20:59.586632  187480 cli_runner.go:133] Run: docker run --rm --name kindnet-20211231101407-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211231101407-6736 --entrypoint /usr/bin/test -v kindnet-20211231101407-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:21:00.343020  187480 oci.go:106] Successfully prepared a docker volume kindnet-20211231101407-6736
	I1231 10:21:00.343091  187480 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:21:00.343113  187480 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:21:00.343184  187480 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211231101407-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I1231 10:21:10.276111  187480 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211231101407-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (9.932841889s)
	I1231 10:21:10.276165  187480 kic.go:188] duration metric: took 9.933047 seconds to extract preloaded images to volume
	W1231 10:21:10.276472  187480 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:21:10.276509  187480 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:21:10.276589  187480 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:21:10.413345  187480 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20211231101407-6736 --name kindnet-20211231101407-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211231101407-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20211231101407-6736 --network kindnet-20211231101407-6736 --ip 192.168.49.2 --volume kindnet-20211231101407-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I1231 10:21:10.897447  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Running}}
	I1231 10:21:10.948554  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:10.996433  187480 cli_runner.go:133] Run: docker exec kindnet-20211231101407-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:21:11.119413  187480 oci.go:175] the created container "kindnet-20211231101407-6736" has a running status.
	I1231 10:21:11.119447  187480 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa...
	I1231 10:21:11.301527  187480 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:21:11.442402  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:11.494092  187480 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:21:11.494126  187480 kic_runner.go:114] Args: [docker exec --privileged kindnet-20211231101407-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:21:11.634250  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:11.685845  187480 machine.go:88] provisioning docker machine ...
	I1231 10:21:11.685891  187480 ubuntu.go:169] provisioning hostname "kindnet-20211231101407-6736"
	I1231 10:21:11.685960  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:11.730898  187480 main.go:130] libmachine: Using SSH client type: native
	I1231 10:21:11.731170  187480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49377 <nil> <nil>}
	I1231 10:21:11.731198  187480 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20211231101407-6736 && echo "kindnet-20211231101407-6736" | sudo tee /etc/hostname
	I1231 10:21:11.885466  187480 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20211231101407-6736
	
	I1231 10:21:11.885550  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:11.931702  187480 main.go:130] libmachine: Using SSH client type: native
	I1231 10:21:11.931935  187480 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49377 <nil> <nil>}
	I1231 10:21:11.931968  187480 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20211231101407-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20211231101407-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20211231101407-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:21:12.080900  187480 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:21:12.080944  187480 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:21:12.080969  187480 ubuntu.go:177] setting up certificates
	I1231 10:21:12.080981  187480 provision.go:83] configureAuth start
	I1231 10:21:12.081051  187480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211231101407-6736
	I1231 10:21:12.138911  187480 provision.go:138] copyHostCerts
	I1231 10:21:12.138986  187480 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:21:12.138998  187480 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:21:12.139092  187480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:21:12.139211  187480 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:21:12.139227  187480 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:21:12.139279  187480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:21:12.139353  187480 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:21:12.139359  187480 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:21:12.139383  187480 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:21:12.139438  187480 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.kindnet-20211231101407-6736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20211231101407-6736]
	I1231 10:21:12.207359  187480 provision.go:172] copyRemoteCerts
	I1231 10:21:12.207452  187480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:21:12.207511  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:12.249903  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:12.352592  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:21:12.376565  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1231 10:21:12.405504  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1231 10:21:12.428967  187480 provision.go:86] duration metric: configureAuth took 347.963654ms
	I1231 10:21:12.429008  187480 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:21:12.429229  187480 config.go:176] Loaded profile config "kindnet-20211231101407-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:21:12.429251  187480 machine.go:91] provisioned docker machine in 743.380374ms
	I1231 10:21:12.429260  187480 client.go:171] LocalClient.Create took 13.193846346s
	I1231 10:21:12.429280  187480 start.go:168] duration metric: libmachine.API.Create for "kindnet-20211231101407-6736" took 13.193917704s
	I1231 10:21:12.429295  187480 start.go:267] post-start starting for "kindnet-20211231101407-6736" (driver="docker")
	I1231 10:21:12.429306  187480 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:21:12.429382  187480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:21:12.429427  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:12.477580  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:12.578265  187480 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:21:12.582295  187480 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:21:12.582334  187480 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:21:12.582352  187480 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:21:12.582360  187480 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:21:12.582373  187480 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:21:12.582452  187480 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:21:12.582559  187480 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:21:12.582687  187480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:21:12.592319  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:21:12.619070  187480 start.go:270] post-start completed in 189.756068ms
	I1231 10:21:12.619522  187480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211231101407-6736
	I1231 10:21:12.666302  187480 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/config.json ...
	I1231 10:21:12.666610  187480 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:21:12.666662  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:12.714462  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:12.814517  187480 start.go:129] duration metric: createHost completed in 13.5839775s
	I1231 10:21:12.814551  187480 start.go:80] releasing machines lock for "kindnet-20211231101407-6736", held for 13.584165255s
	I1231 10:21:12.814774  187480 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211231101407-6736
	I1231 10:21:12.855988  187480 ssh_runner.go:195] Run: systemctl --version
	I1231 10:21:12.856039  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:12.856049  187480 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:21:12.856108  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:12.896663  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:12.898851  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:12.989039  187480 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:21:13.013862  187480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:21:13.024611  187480 docker.go:158] disabling docker service ...
	I1231 10:21:13.024673  187480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:21:13.046716  187480 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:21:13.058745  187480 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:21:13.162496  187480 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:21:13.255805  187480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:21:13.268113  187480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:21:13.283981  187480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I1231 10:21:13.300407  187480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:21:13.309051  187480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:21:13.318775  187480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:21:13.406586  187480 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:21:13.496527  187480 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:21:13.496600  187480 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:21:13.501677  187480 start.go:458] Will wait 60s for crictl version
	I1231 10:21:13.501792  187480 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:21:13.536369  187480 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:21:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:21:24.583347  187480 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:21:24.646714  187480 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:21:24.646799  187480 ssh_runner.go:195] Run: containerd --version
	I1231 10:21:24.670982  187480 ssh_runner.go:195] Run: containerd --version
	I1231 10:21:24.710875  187480 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:21:24.710983  187480 cli_runner.go:133] Run: docker network inspect kindnet-20211231101407-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:21:24.768551  187480 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1231 10:21:24.773337  187480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:21:24.795995  187480 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:21:24.798887  187480 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:21:24.801760  187480 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:21:24.801863  187480 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:21:24.801948  187480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:21:24.841591  187480 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:21:24.841648  187480 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:21:24.841785  187480 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:21:24.873718  187480 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:21:24.873746  187480 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:21:24.873798  187480 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:21:24.932172  187480 cni.go:93] Creating CNI manager for "kindnet"
	I1231 10:21:24.932215  187480 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:21:24.932308  187480 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20211231101407-6736 NodeName:kindnet-20211231101407-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:21:24.932479  187480 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kindnet-20211231101407-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
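	The rendered kubeadm config above can be exercised before the real `kubeadm init` below ever touches the node; a sketch using kubeadm's dry-run mode with the same binary path minikube uses:

    sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run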
	I1231 10:21:24.932601  187480 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=kindnet-20211231101407-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:kindnet-20211231101407-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
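	The [Unit]/[Service] text above becomes the systemd drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Two standard checks that it took effect (sketch):

    # Show the merged unit, including the ExecStart override with --cni-conf-dir=/etc/cni/net.mk:
    systemctl cat kubelet
    # Follow the kubelet as it starts:
    sudo journalctl -u kubelet -f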
	I1231 10:21:24.932671  187480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:21:24.947446  187480 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:21:24.947524  187480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:21:24.956670  187480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (634 bytes)
	I1231 10:21:24.973328  187480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:21:25.008845  187480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1231 10:21:25.045916  187480 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:21:25.056137  187480 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:21:25.069204  187480 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736 for IP: 192.168.49.2
	I1231 10:21:25.069325  187480 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:21:25.069379  187480 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:21:25.069449  187480 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.key
	I1231 10:21:25.069467  187480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.crt with IP's: []
	I1231 10:21:25.384336  187480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.crt ...
	I1231 10:21:25.384384  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.crt: {Name:mkafe944a9e523ae22b385787ca0736477f09952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:25.384610  187480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.key ...
	I1231 10:21:25.384633  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/client.key: {Name:mk9d101e45337dfe0aa88873d7212ec2febc9bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:25.384865  187480 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key.dd3b5fb2
	I1231 10:21:25.384901  187480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:21:25.883531  187480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt.dd3b5fb2 ...
	I1231 10:21:25.883581  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt.dd3b5fb2: {Name:mkfc3b66ea7f99d4b805b656e89f3f44c1e4cdc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:25.883846  187480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key.dd3b5fb2 ...
	I1231 10:21:25.883869  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key.dd3b5fb2: {Name:mk63001757e2e53a620664f276fa49c10ae3a4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:25.883990  187480 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt
	I1231 10:21:25.884070  187480 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key
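	The apiserver certificate generated above has to carry SANs for every address a client may dial (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1 plus the control-plane name). A sketch for inspecting it, assuming $MINIKUBE_HOME points at the .minikube directory from this run:

    openssl x509 -noout -text \
        -in "$MINIKUBE_HOME/profiles/kindnet-20211231101407-6736/apiserver.crt" \
        | grep -A2 'Subject Alternative Name'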
	I1231 10:21:25.884137  187480 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.key
	I1231 10:21:25.884162  187480 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.crt with IP's: []
	I1231 10:21:26.041780  187480 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.crt ...
	I1231 10:21:26.041825  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.crt: {Name:mkb836e70e105e01939ac7b16b2db89db7a5eedd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:26.042064  187480 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.key ...
	I1231 10:21:26.042085  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.key: {Name:mk8943eb9b0f3e54b65980e263cbabd40edb6d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:26.042294  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:21:26.042346  187480 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:21:26.042364  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:21:26.042401  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:21:26.042439  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:21:26.042480  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:21:26.042544  187480 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:21:26.044262  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:21:26.074455  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:21:26.104442  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:21:26.131940  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/kindnet-20211231101407-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:21:26.169697  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:21:26.196618  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:21:26.223970  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:21:26.256345  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:21:26.285773  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:21:26.316689  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:21:26.346973  187480 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:21:26.372737  187480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:21:26.396852  187480 ssh_runner.go:195] Run: openssl version
	I1231 10:21:26.404074  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:21:26.417697  187480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:21:26.426734  187480 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:21:26.426826  187480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:21:26.438441  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:21:26.453232  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:21:26.465070  187480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:26.470586  187480 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:26.470677  187480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:21:26.493306  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:21:26.505982  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:21:26.519546  187480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:21:26.524854  187480 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:21:26.524929  187480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:21:26.532797  187480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
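	The link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the file name is the certificate's subject hash, which is exactly what `openssl x509 -hash` prints, plus a ".0" suffix. Reproducing one of the links by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here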
	I1231 10:21:26.545767  187480 kubeadm.go:388] StartCluster: {Name:kindnet-20211231101407-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:kindnet-20211231101407-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:21:26.545914  187480 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:21:26.545974  187480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:21:26.615762  187480 cri.go:87] found id: ""
	I1231 10:21:26.615939  187480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:21:26.629146  187480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:21:26.641010  187480 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:21:26.641085  187480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:21:26.650155  187480 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:21:26.650297  187480 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:21:27.132776  187480 out.go:203]   - Generating certificates and keys ...
	I1231 10:21:30.372857  187480 out.go:203]   - Booting up control plane ...
	I1231 10:21:43.946209  187480 out.go:203]   - Configuring RBAC rules ...
	I1231 10:21:44.444348  187480 cni.go:93] Creating CNI manager for "kindnet"
	I1231 10:21:44.446962  187480 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:21:44.447062  187480 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:21:44.453056  187480 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:21:44.453083  187480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:21:44.471752  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:21:45.609896  187480 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.138100678s)
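	With the kindnet manifest applied, the node can only flip to Ready once a kindnet pod runs on it and drops a CNI config into /etc/cni/net.mk. Checks worth running at this point, as a sketch (the app=kindnet label is the manifest's conventional one, assumed here):

    kubectl --context kindnet-20211231101407-6736 -n kube-system get pods -l app=kindnet -o wide
    minikube ssh -p kindnet-20211231101407-6736 -- sudo ls /etc/cni/net.mk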
	I1231 10:21:45.609951  187480 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:21:45.610075  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:45.610175  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=kindnet-20211231101407-6736 minikube.k8s.io/updated_at=2021_12_31T10_21_45_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:45.635925  187480 ops.go:34] apiserver oom_adj: -16
	I1231 10:21:45.751100  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:46.377541  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:46.877119  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:47.377489  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:47.877947  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:48.377576  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:48.877917  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:49.377301  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:49.877695  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:50.377320  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:50.877223  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:51.377684  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:51.877489  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:52.377529  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:52.877231  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:53.377502  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:53.876956  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:54.377536  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:54.877069  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:55.377196  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:55.877100  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:56.377162  187480 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:21:56.446348  187480 kubeadm.go:864] duration metric: took 10.836311157s to wait for elevateKubeSystemPrivileges.
	I1231 10:21:56.446395  187480 kubeadm.go:390] StartCluster complete in 29.900641078s
	I1231 10:21:56.446413  187480 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:56.446504  187480 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:21:56.448621  187480 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:21:56.972923  187480 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20211231101407-6736" rescaled to 1
	I1231 10:21:56.972997  187480 start.go:206] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:21:56.973033  187480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:21:56.976122  187480 out.go:176] * Verifying Kubernetes components...
	I1231 10:21:56.976187  187480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:21:56.973110  187480 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:21:56.973293  187480 config.go:176] Loaded profile config "kindnet-20211231101407-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:21:56.976315  187480 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20211231101407-6736"
	I1231 10:21:56.976340  187480 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20211231101407-6736"
	W1231 10:21:56.976354  187480 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:21:56.976371  187480 addons.go:65] Setting default-storageclass=true in profile "kindnet-20211231101407-6736"
	I1231 10:21:56.976387  187480 host.go:66] Checking if "kindnet-20211231101407-6736" exists ...
	I1231 10:21:56.976409  187480 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20211231101407-6736"
	I1231 10:21:56.976763  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:56.976971  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:57.035287  187480 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:21:57.035466  187480 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:21:57.035514  187480 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:21:57.035584  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:57.036151  187480 addons.go:153] Setting addon default-storageclass=true in "kindnet-20211231101407-6736"
	W1231 10:21:57.036182  187480 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:21:57.036206  187480 host.go:66] Checking if "kindnet-20211231101407-6736" exists ...
	I1231 10:21:57.036733  187480 cli_runner.go:133] Run: docker container inspect kindnet-20211231101407-6736 --format={{.State.Status}}
	I1231 10:21:57.100146  187480 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:21:57.103002  187480 node_ready.go:35] waiting up to 5m0s for node "kindnet-20211231101407-6736" to be "Ready" ...
	I1231 10:21:57.156173  187480 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:21:57.156219  187480 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:21:57.156599  187480 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211231101407-6736
	I1231 10:21:57.158886  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:57.231921  187480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49377 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/kindnet-20211231101407-6736/id_rsa Username:docker}
	I1231 10:21:57.398595  187480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:21:57.504659  187480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:21:57.581806  187480 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
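	The sed pipeline a few lines above splices a hosts{} block into the CoreDNS Corefile so that pods can resolve host.minikube.internal. The patched Corefile can be read back directly (sketch):

    kubectl --context kindnet-20211231101407-6736 -n kube-system \
        get configmap coredns -o jsonpath='{.data.Corefile}'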
	I1231 10:21:58.089103  187480 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:21:58.089144  187480 addons.go:417] enableAddons completed in 1.116057712s
	I1231 10:21:59.121286  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:01.624675  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:04.120421  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:06.121070  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:08.621487  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:11.120428  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:13.122526  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:15.124224  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:17.620476  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:20.121144  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:22.621821  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:25.121761  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:27.620302  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:29.621200  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:32.122130  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:34.621511  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:37.120674  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:39.120970  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:41.620467  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:44.120581  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:46.121667  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:48.620859  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:51.120807  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:53.120865  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:55.120969  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:57.122142  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:22:59.122337  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:01.620409  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:04.120358  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:06.121183  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:08.121237  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:10.620747  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:13.121451  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:15.620750  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:18.120015  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:20.120624  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:22.619730  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:24.620544  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:27.121026  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:29.121564  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:31.621425  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:34.121404  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:36.621080  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:39.120932  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:41.619913  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:43.620200  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:45.621989  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:48.120583  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:50.120857  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:52.621752  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:55.120527  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:57.120620  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:23:59.120823  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:01.620107  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:04.120901  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:06.120964  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:08.620819  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:11.120290  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:13.123107  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:15.620138  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:17.620510  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:20.120342  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:22.120633  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:24.621049  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:27.121114  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:29.621707  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:32.120560  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:34.620282  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:36.620988  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:39.122723  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:41.620579  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:44.120656  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:46.121068  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:48.621098  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:51.120988  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:53.121060  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:55.620582  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:24:58.120891  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:00.620722  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:03.120492  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:05.620296  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:07.621138  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:10.121865  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:12.620716  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:15.121256  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:17.620219  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:19.620984  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:22.120562  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:24.121312  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:26.623167  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:29.119909  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:31.121155  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:33.621540  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:35.622456  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:38.120905  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:40.121578  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:42.122277  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:44.620521  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:46.620635  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:48.621989  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:51.120589  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:53.620668  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:55.621106  187480 node_ready.go:58] node "kindnet-20211231101407-6736" has status "Ready":"False"
	I1231 10:25:57.122662  187480 node_ready.go:38] duration metric: took 4m0.019620789s waiting for node "kindnet-20211231101407-6736" to be "Ready" ...
	I1231 10:25:57.125682  187480 out.go:176] 
	W1231 10:25:57.125825  187480 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:25:57.125838  187480 out.go:241] * 
	W1231 10:25:57.126601  187480 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:25:57.129104  187480 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (298.48s)
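The node sat at Ready:"False" for the entire four-minute polling window above; with a CNI-specific profile that almost always means the CNI pods never became healthy, rather than a kubelet problem. A triage sketch against this profile's kubeconfig context:

    kubectl --context kindnet-20211231101407-6736 get nodes -o wide
    kubectl --context kindnet-20211231101407-6736 describe node kindnet-20211231101407-6736 \
        | sed -n '/Conditions:/,/Addresses:/p'
    kubectl --context kindnet-20211231101407-6736 -n kube-system get pods -o wide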

TestNetworkPlugins/group/bridge/DNS (343.59s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161354483s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
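Every retry below fails identically: the nslookup of kubernetes.default times out, meaning the netcat pod cannot reach CoreDNS at all. The usual first checks, as a sketch (10.96.0.10 is the conventional kube-dns ClusterIP inside the 10.96.0.0/12 service range):

    kubectl --context bridge-20211231101406-6736 -n kube-system get svc,endpoints kube-dns
    # Query the DNS service VIP directly from the failing pod:
    kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- \
        nslookup kubernetes.default.svc.cluster.local 10.96.0.10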
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142413601s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141996336s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:24:41.557698    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.139347229s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153947828s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:25:10.310168    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:25:22.104878    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.110213    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.120517    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.140848    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.181129    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.262003    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.422458    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:22.743048    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144180879s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1231 10:25:23.384019    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:24.664356    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:27.225160    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:32.345478    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:25:36.756998    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:36.762298    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:36.772561    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:36.792920    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:36.833256    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:36.913619    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:37.074038    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:37.394699    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:38.035174    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:39.316165    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:41.876804    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:25:42.585823    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:25:46.997337    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140532254s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157459807s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154857163s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146931004s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:28:05.948543    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137325092s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1231 10:28:20.600385    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:28:21.234672    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.171482457s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (343.59s)
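Note: this subtest repeatedly execs nslookup kubernetes.default inside the cluster's netcat deployment and expects the reply to contain 10.96.0.1, the ClusterIP the kubernetes.default Service gets under minikube's default 10.96.0.0/12 service CIDR; every attempt above timed out after ~15s, so in-cluster DNS never answered. The interleaved cert_rotation.go:168 errors reference client certificates of other profiles that no longer exist on disk (addons-, functional-, auto-, custom-weave-, cilium-) and look like unrelated client-go noise. A minimal triage sketch, assuming the bridge-20211231101406-6736 profile were still running (the kube-system namespace and k8s-app=kube-dns label below are the usual CoreDNS defaults, not taken from this log):

	# Is CoreDNS up in the cluster under test?
	kubectl --context bridge-20211231101406-6736 -n kube-system get pods -l k8s-app=kube-dns
	# Any resolution errors in its logs?
	kubectl --context bridge-20211231101406-6736 -n kube-system logs -l k8s-app=kube-dns --tail=50
	# Re-run the exact probe the test uses; a healthy cluster answers with 10.96.0.1
	kubectl --context bridge-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default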

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (298.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1231 10:26:03.066858    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (4m56.029628674s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1231 10:26:02.667270  207964 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:26:02.667383  207964 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:26:02.667387  207964 out.go:310] Setting ErrFile to fd 2...
	I1231 10:26:02.667391  207964 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:26:02.667500  207964 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:26:02.667784  207964 out.go:304] Setting JSON to false
	I1231 10:26:02.669693  207964 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4117,"bootTime":1640942245,"procs":737,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:26:02.669785  207964 start.go:122] virtualization: kvm guest
	I1231 10:26:02.673719  207964 out.go:176] * [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:26:02.676686  207964 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:26:02.673987  207964 notify.go:174] Checking for updates...
	I1231 10:26:02.679132  207964 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:26:02.681529  207964 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:26:02.684141  207964 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:26:02.686825  207964 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:26:02.687614  207964 config.go:176] Loaded profile config "bridge-20211231101406-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:26:02.687745  207964 config.go:176] Loaded profile config "calico-20211231101408-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:26:02.688006  207964 config.go:176] Loaded profile config "enable-default-cni-20211231101406-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:26:02.688138  207964 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:26:02.737113  207964 docker.go:132] docker version: linux-20.10.12
	I1231 10:26:02.737267  207964 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:26:02.858176  207964 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:26:02.783935334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:26:02.858338  207964 docker.go:237] overlay module found
	I1231 10:26:02.861911  207964 out.go:176] * Using the docker driver based on user configuration
	I1231 10:26:02.862085  207964 start.go:280] selected driver: docker
	I1231 10:26:02.862098  207964 start.go:795] validating driver "docker" against <nil>
	I1231 10:26:02.862122  207964 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:26:02.862286  207964 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:26:02.862299  207964 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:26:02.862366  207964 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:26:02.862403  207964 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:26:02.865296  207964 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:26:02.866078  207964 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:26:02.983362  207964 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:26:02.905175177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:26:02.983513  207964 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:26:02.983674  207964 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:26:02.983693  207964 cni.go:93] Creating CNI manager for ""
	I1231 10:26:02.983698  207964 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:26:02.983705  207964 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:26:02.983712  207964 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:26:02.983720  207964 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 10:26:02.983728  207964 start_flags.go:298] config:
	{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:26:02.987355  207964 out.go:176] * Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	I1231 10:26:02.987463  207964 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:26:02.990086  207964 out.go:176] * Pulling base image ...
	I1231 10:26:02.990140  207964 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:26:02.990206  207964 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:26:02.990264  207964 cache.go:57] Caching tarball of preloaded images
	I1231 10:26:02.990246  207964 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:26:02.990515  207964 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:26:02.990560  207964 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1231 10:26:02.990725  207964 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:26:02.990753  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json: {Name:mk66b068564c95174c7b8024df4647d6bc441cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:03.034340  207964 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:26:03.034390  207964 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:26:03.034412  207964 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:26:03.034452  207964 start.go:313] acquiring machines lock for old-k8s-version-20211231102602-6736: {Name:mk363b8d877fe23a69d731c391a1b6f4ce841b33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:26:03.034674  207964 start.go:317] acquired machines lock for "old-k8s-version-20211231102602-6736" in 179.403µs
	I1231 10:26:03.034722  207964 start.go:89] Provisioning new machine with config: &{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}
	I1231 10:26:03.034820  207964 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:26:03.039435  207964 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1231 10:26:03.039726  207964 start.go:160] libmachine.API.Create for "old-k8s-version-20211231102602-6736" (driver="docker")
	I1231 10:26:03.039769  207964 client.go:168] LocalClient.Create starting
	I1231 10:26:03.039861  207964 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:26:03.039901  207964 main.go:130] libmachine: Decoding PEM data...
	I1231 10:26:03.039926  207964 main.go:130] libmachine: Parsing certificate...
	I1231 10:26:03.040022  207964 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:26:03.040051  207964 main.go:130] libmachine: Decoding PEM data...
	I1231 10:26:03.040067  207964 main.go:130] libmachine: Parsing certificate...
	I1231 10:26:03.040469  207964 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:26:03.077382  207964 cli_runner.go:180] docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:26:03.077472  207964 network_create.go:254] running [docker network inspect old-k8s-version-20211231102602-6736] to gather additional debugging logs...
	I1231 10:26:03.077487  207964 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736
	W1231 10:26:03.113340  207964 cli_runner.go:180] docker network inspect old-k8s-version-20211231102602-6736 returned with exit code 1
	I1231 10:26:03.113375  207964 network_create.go:257] error running [docker network inspect old-k8s-version-20211231102602-6736]: docker network inspect old-k8s-version-20211231102602-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20211231102602-6736
	I1231 10:26:03.113390  207964 network_create.go:259] output of [docker network inspect old-k8s-version-20211231102602-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20211231102602-6736
	
	** /stderr **
	I1231 10:26:03.113443  207964 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:26:03.151415  207964 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000698380] misses:0}
	I1231 10:26:03.151467  207964 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:26:03.151483  207964 network_create.go:106] attempt to create docker network old-k8s-version-20211231102602-6736 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1231 10:26:03.151526  207964 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20211231102602-6736
	I1231 10:26:03.242629  207964 network_create.go:90] docker network old-k8s-version-20211231102602-6736 192.168.49.0/24 created
	I1231 10:26:03.242696  207964 kic.go:106] calculated static IP "192.168.49.2" for the "old-k8s-version-20211231102602-6736" container
	I1231 10:26:03.242749  207964 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:26:03.281009  207964 cli_runner.go:133] Run: docker volume create old-k8s-version-20211231102602-6736 --label name.minikube.sigs.k8s.io=old-k8s-version-20211231102602-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:26:03.322904  207964 oci.go:102] Successfully created a docker volume old-k8s-version-20211231102602-6736
	I1231 10:26:03.322993  207964 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20211231102602-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211231102602-6736 --entrypoint /usr/bin/test -v old-k8s-version-20211231102602-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:26:03.986432  207964 oci.go:106] Successfully prepared a docker volume old-k8s-version-20211231102602-6736
	I1231 10:26:03.986507  207964 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:26:03.986526  207964 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:26:03.986583  207964 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211231102602-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I1231 10:26:13.327618  207964 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20211231102602-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (9.340995655s)
	I1231 10:26:13.327668  207964 kic.go:188] duration metric: took 9.341138 seconds to extract preloaded images to volume
	W1231 10:26:13.327720  207964 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:26:13.327734  207964 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:26:13.327799  207964 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:26:13.470034  207964 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20211231102602-6736 --name old-k8s-version-20211231102602-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20211231102602-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20211231102602-6736 --network old-k8s-version-20211231102602-6736 --ip 192.168.49.2 --volume old-k8s-version-20211231102602-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I1231 10:26:13.996671  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Running}}
	I1231 10:26:14.059181  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:14.105848  207964 cli_runner.go:133] Run: docker exec old-k8s-version-20211231102602-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:26:14.215191  207964 oci.go:175] the created container "old-k8s-version-20211231102602-6736" has a running status.
	I1231 10:26:14.215230  207964 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa...
	I1231 10:26:14.428683  207964 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:26:14.555557  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:14.603536  207964 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:26:14.603561  207964 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20211231102602-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:26:14.730612  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:14.806262  207964 machine.go:88] provisioning docker machine ...
	I1231 10:26:14.806313  207964 ubuntu.go:169] provisioning hostname "old-k8s-version-20211231102602-6736"
	I1231 10:26:14.806387  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:14.859065  207964 main.go:130] libmachine: Using SSH client type: native
	I1231 10:26:14.859293  207964 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49387 <nil> <nil>}
	I1231 10:26:14.859313  207964 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20211231102602-6736 && echo "old-k8s-version-20211231102602-6736" | sudo tee /etc/hostname
	I1231 10:26:15.022670  207964 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20211231102602-6736
	
	I1231 10:26:15.022774  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:15.071116  207964 main.go:130] libmachine: Using SSH client type: native
	I1231 10:26:15.071335  207964 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49387 <nil> <nil>}
	I1231 10:26:15.071380  207964 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20211231102602-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20211231102602-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20211231102602-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:26:15.225634  207964 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:26:15.225678  207964 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:26:15.225714  207964 ubuntu.go:177] setting up certificates
	I1231 10:26:15.225724  207964 provision.go:83] configureAuth start
	I1231 10:26:15.225777  207964 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:26:15.287143  207964 provision.go:138] copyHostCerts
	I1231 10:26:15.287220  207964 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:26:15.287231  207964 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:26:15.287302  207964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:26:15.287399  207964 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:26:15.287431  207964 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:26:15.287470  207964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:26:15.287537  207964 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:26:15.287548  207964 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:26:15.287571  207964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:26:15.287615  207964 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20211231102602-6736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20211231102602-6736]
	I1231 10:26:15.650360  207964 provision.go:172] copyRemoteCerts
	I1231 10:26:15.650426  207964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:26:15.650463  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:15.694575  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:15.804959  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I1231 10:26:15.831425  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:26:15.855538  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:26:15.878095  207964 provision.go:86] duration metric: configureAuth took 652.35663ms
	I1231 10:26:15.878123  207964 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:26:15.878338  207964 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:26:15.878353  207964 machine.go:91] provisioned docker machine in 1.072059648s
	I1231 10:26:15.878359  207964 client.go:171] LocalClient.Create took 12.838578414s
	I1231 10:26:15.878380  207964 start.go:168] duration metric: libmachine.API.Create for "old-k8s-version-20211231102602-6736" took 12.838652099s
	I1231 10:26:15.878389  207964 start.go:267] post-start starting for "old-k8s-version-20211231102602-6736" (driver="docker")
	I1231 10:26:15.878400  207964 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:26:15.878454  207964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:26:15.878500  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:15.928215  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:16.032047  207964 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:26:16.036560  207964 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:26:16.036591  207964 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:26:16.036608  207964 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:26:16.036616  207964 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:26:16.036628  207964 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:26:16.036706  207964 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:26:16.036798  207964 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:26:16.036903  207964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:26:16.048931  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:26:16.070320  207964 start.go:270] post-start completed in 191.911004ms
	I1231 10:26:16.070733  207964 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:26:16.126709  207964 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:26:16.126998  207964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:26:16.127046  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:16.177092  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:16.273989  207964 start.go:129] duration metric: createHost completed in 13.239151532s
	I1231 10:26:16.274043  207964 start.go:80] releasing machines lock for "old-k8s-version-20211231102602-6736", held for 13.239340156s
	I1231 10:26:16.274180  207964 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:26:16.325195  207964 ssh_runner.go:195] Run: systemctl --version
	I1231 10:26:16.325258  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:16.325294  207964 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:26:16.325379  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:16.374309  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:16.376059  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:16.473493  207964 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:26:16.500111  207964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:26:16.512988  207964 docker.go:158] disabling docker service ...
	I1231 10:26:16.513115  207964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:26:16.533835  207964 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:26:16.547430  207964 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:26:16.650128  207964 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:26:16.738789  207964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:26:16.751126  207964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:26:16.771182  207964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
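	# Sketch, not part of the captured log: the containerd config travels as a base64
	# payload and is decoded on the node by the command above. To read it locally,
	# save the quoted payload to a scratch file (payload.b64, hypothetical name):
	base64 -d payload.b64 > containerd-config.toml
	grep -E 'SystemdCgroup|conf_dir' containerd-config.toml
	# → SystemdCgroup = false, conf_dir = "/etc/cni/net.mk" (matching the kubelet flags below)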
	I1231 10:26:16.845409  207964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:26:16.853961  207964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:26:16.862455  207964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:26:16.965470  207964 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:26:17.046195  207964 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:26:17.046268  207964 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:26:17.050956  207964 start.go:458] Will wait 60s for crictl version
	I1231 10:26:17.051025  207964 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:26:17.090304  207964 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:26:17Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:26:28.138054  207964 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:26:28.170229  207964 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:26:28.170300  207964 ssh_runner.go:195] Run: containerd --version
	I1231 10:26:28.194116  207964 ssh_runner.go:195] Run: containerd --version
	I1231 10:26:28.220843  207964 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	I1231 10:26:28.221044  207964 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:26:28.262538  207964 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1231 10:26:28.266763  207964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
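	# Sketch, not part of the captured log: the one-liner above edits /etc/hosts
	# atomically (filter any stale entry into a temp copy, append the fresh record,
	# copy the file back). Expected result:
	grep 'host.minikube.internal' /etc/hosts   # → 192.168.49.1	host.minikube.internal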
	I1231 10:26:28.281832  207964 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:26:28.284334  207964 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:26:28.286984  207964 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:26:28.287070  207964 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:26:28.287139  207964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:26:28.319083  207964 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:26:28.319115  207964 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:26:28.319160  207964 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:26:28.365444  207964 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:26:28.365472  207964 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:26:28.365516  207964 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:26:28.399694  207964 cni.go:93] Creating CNI manager for ""
	I1231 10:26:28.399731  207964 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:26:28.399744  207964 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:26:28.399759  207964 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20211231102602-6736 NodeName:old-k8s-version-20211231102602-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:26:28.399888  207964 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20211231102602-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20211231102602-6736
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:26:28.399991  207964 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=old-k8s-version-20211231102602-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
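	# Sketch, not part of the captured log: the drop-in above pins kubelet to the
	# containerd socket and the /etc/cni/net.mk conf dir. After the copies below,
	# the effective unit can be inspected on the node with standard systemd tools:
	sudo systemctl cat kubelet            # merged unit + 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the single effective ExecStart line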
	I1231 10:26:28.400048  207964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1231 10:26:28.410417  207964 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:26:28.410620  207964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:26:28.420676  207964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (642 bytes)
	I1231 10:26:28.440675  207964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:26:28.456521  207964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
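	# Sketch, not part of the captured log: with the rendered config staged on the
	# node, it can be sanity-checked before the real init via kubeadm's dry-run mode:
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run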
	I1231 10:26:28.473164  207964 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:26:28.477982  207964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:26:28.490106  207964 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736 for IP: 192.168.49.2
	I1231 10:26:28.490242  207964 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:26:28.490296  207964 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:26:28.490353  207964 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key
	I1231 10:26:28.490368  207964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt with IP's: []
	I1231 10:26:28.841257  207964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt ...
	I1231 10:26:28.841298  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: {Name:mk52344fc42b3d5eaa90dc587ebb29faa8cdae3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:28.841531  207964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key ...
	I1231 10:26:28.841547  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key: {Name:mk3a1953fa11e6278672e9dc97f180f049558be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:28.841648  207964 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2
	I1231 10:26:28.841672  207964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:26:28.977169  207964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt.dd3b5fb2 ...
	I1231 10:26:28.977216  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt.dd3b5fb2: {Name:mke83fb9bb2362f57108d11efb8eeac11be3650f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:28.977459  207964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2 ...
	I1231 10:26:28.977472  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2: {Name:mk3d3c4e5a877b5aee424a1190d1b561004a427b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:28.977596  207964 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt
	I1231 10:26:28.977679  207964 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key
	I1231 10:26:28.977746  207964 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key
	I1231 10:26:28.977760  207964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt with IP's: []
	I1231 10:26:29.042634  207964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt ...
	I1231 10:26:29.042677  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt: {Name:mk17851c997ddb1d6c2b25a7a56e6aef39822a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:29.042893  207964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key ...
	I1231 10:26:29.042911  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key: {Name:mk32bad286a1bd969428970c3a74f99d0eeb2312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:29.043102  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:26:29.043147  207964 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:26:29.043162  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:26:29.043348  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:26:29.043375  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:26:29.043404  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:26:29.043461  207964 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:26:29.044505  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:26:29.069569  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:26:29.094884  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:26:29.117835  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:26:29.140755  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:26:29.162613  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:26:29.188194  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:26:29.213442  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:26:29.237128  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:26:29.260565  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:26:29.284699  207964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:26:29.309763  207964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:26:29.331952  207964 ssh_runner.go:195] Run: openssl version
	I1231 10:26:29.338807  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:26:29.349047  207964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:26:29.353029  207964 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:26:29.353111  207964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:26:29.359037  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:26:29.369058  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:26:29.378512  207964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:26:29.382616  207964 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:26:29.382692  207964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:26:29.389453  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:26:29.398770  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:26:29.408363  207964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:26:29.412582  207964 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:26:29.412654  207964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:26:29.426775  207964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:26:29.436163  207964 kubeadm.go:388] StartCluster: {Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:26:29.436314  207964 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:26:29.436373  207964 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:26:29.467922  207964 cri.go:87] found id: ""
	I1231 10:26:29.467987  207964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:26:29.476243  207964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:26:29.484816  207964 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:26:29.484891  207964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:26:29.492936  207964 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:26:29.492995  207964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:26:30.055028  207964 out.go:203]   - Generating certificates and keys ...
	I1231 10:26:32.469285  207964 out.go:203]   - Booting up control plane ...
	I1231 10:26:42.031977  207964 out.go:203]   - Configuring RBAC rules ...
	I1231 10:26:42.459495  207964 cni.go:93] Creating CNI manager for ""
	I1231 10:26:42.459532  207964 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:26:42.462363  207964 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:26:42.462447  207964 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:26:42.466505  207964 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I1231 10:26:42.466538  207964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:26:42.481832  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:26:42.903688  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=old-k8s-version-20211231102602-6736 minikube.k8s.io/updated_at=2021_12_31T10_26_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:42.903742  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:42.903710  207964 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:26:43.032807  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:43.032830  207964 ops.go:34] apiserver oom_adj: -16
	I1231 10:26:43.650405  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:44.149933  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:44.649826  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:45.150367  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:45.650327  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:46.150723  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:46.649751  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:47.149770  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:47.650382  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:48.149756  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:48.650120  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:49.150390  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:49.650794  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:50.150052  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:50.650402  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:51.150651  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:51.650115  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:52.149841  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:52.650709  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:53.150448  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:53.649730  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:54.149867  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:54.649738  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:55.149915  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:55.649857  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:56.149861  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:56.650715  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:57.150485  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:57.650803  207964 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:26:57.795574  207964 kubeadm.go:864] duration metric: took 14.891962492s to wait for elevateKubeSystemPrivileges.
	I1231 10:26:57.795613  207964 kubeadm.go:390] StartCluster complete in 28.359464015s
	I1231 10:26:57.795634  207964 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:57.795744  207964 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:26:57.798455  207964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:26:58.389072  207964 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211231102602-6736" rescaled to 1
	I1231 10:26:58.389207  207964 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}
	I1231 10:26:58.391800  207964 out.go:176] * Verifying Kubernetes components...
	I1231 10:26:58.389282  207964 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:26:58.391878  207964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:26:58.391928  207964 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:26:58.389287  207964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:26:58.389661  207964 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:26:58.391972  207964 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:26:58.392004  207964 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211231102602-6736"
	I1231 10:26:58.392032  207964 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211231102602-6736"
	W1231 10:26:58.392054  207964 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:26:58.392084  207964 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:26:58.392465  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:58.392764  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:58.465503  207964 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:26:58.465710  207964 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:26:58.465739  207964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:26:58.465802  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:58.485313  207964 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211231102602-6736"
	W1231 10:26:58.485348  207964 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:26:58.485380  207964 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:26:58.485947  207964 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:26:58.514914  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:58.538442  207964 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:26:58.538470  207964 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:26:58.538522  207964 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:26:58.575666  207964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49387 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:26:58.607307  207964 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
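	# Note, not part of the captured log: the sed pipeline above splices a hosts
	# stanza into the CoreDNS Corefile just ahead of its forward directive, so
	# in-cluster lookups of host.minikube.internal resolve to the gateway.
	# Reconstructed from the sed expression, the injected block is:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }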
	I1231 10:26:58.609863  207964 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:26:58.800989  207964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:26:58.801491  207964 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:26:59.299845  207964 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I1231 10:26:59.498393  207964 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I1231 10:26:59.498434  207964 addons.go:417] enableAddons completed in 1.109161758s
	I1231 10:27:00.617497  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:02.618309  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:05.117568  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:07.117715  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:09.617718  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:12.117910  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:14.617676  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:17.117003  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:19.617652  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:22.116883  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:24.116999  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:26.117690  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:28.617323  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:31.117621  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:33.118065  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:35.617010  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:37.617686  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:40.116947  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:42.117528  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:44.118069  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:46.617464  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:49.116913  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:51.117207  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:53.617017  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:55.618052  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:27:58.117614  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:00.617710  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:03.117742  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:05.618015  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:08.117441  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:10.118043  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:12.118111  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:14.617514  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:17.117750  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:19.617646  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:21.617755  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:24.117818  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:26.617124  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:28.617550  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:31.116964  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:33.117601  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:35.617720  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:38.118119  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:40.617737  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:43.117857  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:45.117897  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:47.617207  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:49.617550  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:52.117088  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:54.117926  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:56.617128  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:28:58.618085  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:01.117456  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:03.117671  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:05.617696  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:08.117463  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:10.617968  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:13.118035  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:15.617985  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:18.117897  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:20.617661  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:23.117130  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:25.117792  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:27.618729  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:29.619469  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:32.117758  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:34.617173  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:36.621269  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:39.119480  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:41.363527  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:43.618321  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:45.618846  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:48.117691  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:50.118545  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:52.119192  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:54.617257  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:56.810366  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:59.120692  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:01.617556  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:03.618908  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:05.621392  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:08.117539  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:10.118547  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:12.617324  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:14.617696  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:16.617970  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:18.618038  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:20.618145  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:23.118084  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:25.120431  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:27.617851  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:29.617978  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:32.117705  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:34.617641  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:37.118058  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:39.617030  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:41.618652  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:44.118016  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:46.618570  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:49.118277  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:51.618149  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:54.117720  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:56.618943  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:58.620822  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:58.620862  207964 node_ready.go:38] duration metric: took 4m0.010971187s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:30:58.623527  207964 out.go:176] 
	W1231 10:30:58.623707  207964 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:30:58.623722  207964 out.go:241] * 
	W1231 10:30:58.624497  207964 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:30:58.626502  207964 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
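The stderr log above shows minikube polling the node's Ready condition (node_ready.go) every few seconds until the wait gives up and the start exits with GUEST_START. To see why the node never reported Ready, the condition can be queried directly; a minimal sketch, assuming the cluster is still running and using this run's profile name (minikube names the kubeconfig context after the profile):

	# Show the node's conditions, including the kubelet's reason for Ready=False
	kubectl --context old-k8s-version-20211231102602-6736 describe node old-k8s-version-20211231102602-6736
	# Or pull just the Ready condition via a jsonpath filter
	kubectl --context old-k8s-version-20211231102602-6736 get node old-k8s-version-20211231102602-6736 -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
	# Full post-mortem bundle, as suggested in the failure box above
	minikube -p old-k8s-version-20211231102602-6736 logs --file=logs.txt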
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:26:13.982747787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8fb65b850e95d9291586c192e53e52d1c3afd0fdfabe6699d1eda53b3ac8da7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49387"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49383"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49384"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8fb65b850e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "a8e57908631bc9f44ae1933829bac6c4d6b691fb5425dc0680fa172c99243c35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
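The full inspect dump above is useful for archiving, but individual fields can be pulled with docker's Go-template format flag; a sketch against this run's container name, using only standard docker CLI flags:

	# Container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-20211231102602-6736
	# IP address on the per-profile network
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-20211231102602-6736
	# Host port mapped to the API server port 8443
	docker port old-k8s-version-20211231102602-6736 8443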
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.04978738s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                       |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list --output json                               | minikube                               | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:06 UTC | Fri, 31 Dec 2021 10:20:07 UTC |
	| delete  | -p pause-20211231101829-6736                     | pause-20211231101829-6736              | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:07 UTC | Fri, 31 Dec 2021 10:20:08 UTC |
	| start   | -p auto-20211231101406-6736                      | auto-20211231101406-6736               | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:18:58 UTC | Fri, 31 Dec 2021 10:20:21 UTC |
	|         | --memory=2048                                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr                                |                                        |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                    |                                        |         |         |                               |                               |
	|         | --driver=docker                                  |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| ssh     | -p auto-20211231101406-6736                      | auto-20211231101406-6736               | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:21 UTC | Fri, 31 Dec 2021 10:20:21 UTC |
	|         | pgrep -a kubelet                                 |                                        |         |         |                               |                               |
	| start   | -p                                               | running-upgrade-20211231101758-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:18:56 UTC | Fri, 31 Dec 2021 10:20:34 UTC |
	|         | running-upgrade-20211231101758-6736              |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                  |                                        |         |         |                               |                               |
	|         | -v=1 --driver=docker                             |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| start   | -p                                               | custom-weave-20211231101408-6736       | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:19:12 UTC | Fri, 31 Dec 2021 10:20:35 UTC |
	|         | custom-weave-20211231101408-6736                 |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                  |                                        |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                    |                                        |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml                     |                                        |         |         |                               |                               |
	|         | --driver=docker                                  |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| ssh     | -p                                               | custom-weave-20211231101408-6736       | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:35 UTC | Fri, 31 Dec 2021 10:20:36 UTC |
	|         | custom-weave-20211231101408-6736                 |                                        |         |         |                               |                               |
	|         | pgrep -a kubelet                                 |                                        |         |         |                               |                               |
	| delete  | -p auto-20211231101406-6736                      | auto-20211231101406-6736               | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:33 UTC | Fri, 31 Dec 2021 10:20:37 UTC |
	| delete  | -p                                               | running-upgrade-20211231101758-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:34 UTC | Fri, 31 Dec 2021 10:20:37 UTC |
	|         | running-upgrade-20211231101758-6736              |                                        |         |         |                               |                               |
	| delete  | -p                                               | custom-weave-20211231101408-6736       | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:46 UTC | Fri, 31 Dec 2021 10:20:58 UTC |
	|         | custom-weave-20211231101408-6736                 |                                        |         |         |                               |                               |
	| start   | -p cilium-20211231101408-6736                    | cilium-20211231101408-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:08 UTC | Fri, 31 Dec 2021 10:21:59 UTC |
	|         | --memory=2048                                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr                                |                                        |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                    |                                        |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| ssh     | -p cilium-20211231101408-6736                    | cilium-20211231101408-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:22:04 UTC | Fri, 31 Dec 2021 10:22:04 UTC |
	|         | pgrep -a kubelet                                 |                                        |         |         |                               |                               |
	| delete  | -p cilium-20211231101408-6736                    | cilium-20211231101408-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:22:15 UTC | Fri, 31 Dec 2021 10:22:18 UTC |
	| start   | -p bridge-20211231101406-6736                    | bridge-20211231101406-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:22:18 UTC | Fri, 31 Dec 2021 10:23:28 UTC |
	|         | --memory=2048                                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr                                |                                        |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                    |                                        |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| ssh     | -p bridge-20211231101406-6736                    | bridge-20211231101406-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:23:28 UTC | Fri, 31 Dec 2021 10:23:29 UTC |
	|         | pgrep -a kubelet                                 |                                        |         |         |                               |                               |
	| -p      | kindnet-20211231101407-6736                      | kindnet-20211231101407-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:25:57 UTC | Fri, 31 Dec 2021 10:25:58 UTC |
	|         | logs -n 25                                       |                                        |         |         |                               |                               |
	| delete  | -p kindnet-20211231101407-6736                   | kindnet-20211231101407-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:25:59 UTC | Fri, 31 Dec 2021 10:26:02 UTC |
	| start   | -p                                               | enable-default-cni-20211231101406-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:20:37 UTC | Fri, 31 Dec 2021 10:26:08 UTC |
	|         | enable-default-cni-20211231101406-6736           |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                  |                                        |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                    |                                        |         |         |                               |                               |
	|         | --enable-default-cni=true                        |                                        |         |         |                               |                               |
	|         | --driver=docker                                  |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	| ssh     | -p                                               | enable-default-cni-20211231101406-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:26:08 UTC | Fri, 31 Dec 2021 10:26:09 UTC |
	|         | enable-default-cni-20211231101406-6736           |                                        |         |         |                               |                               |
	|         | pgrep -a kubelet                                 |                                        |         |         |                               |                               |
	| -p      | bridge-20211231101406-6736                       | bridge-20211231101406-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:23 UTC | Fri, 31 Dec 2021 10:29:24 UTC |
	|         | logs -n 25                                       |                                        |         |         |                               |                               |
	| delete  | -p bridge-20211231101406-6736                    | bridge-20211231101406-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:25 UTC | Fri, 31 Dec 2021 10:29:28 UTC |
	| -p      | calico-20211231101408-6736                       | calico-20211231101408-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:46 UTC | Fri, 31 Dec 2021 10:29:47 UTC |
	|         | logs -n 25                                       |                                        |         |         |                               |                               |
	| delete  | -p calico-20211231101408-6736                    | calico-20211231101408-6736             | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:48 UTC | Fri, 31 Dec 2021 10:29:53 UTC |
	| start   | -p no-preload-20211231102928-6736                | no-preload-20211231102928-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:28 UTC | Fri, 31 Dec 2021 10:30:43 UTC |
	|         | --memory=2200 --alsologtostderr                  |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false                      |                                        |         |         |                               |                               |
	|         | --driver=docker                                  |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd                   |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                |                                        |         |         |                               |                               |
	| addons  | enable metrics-server -p                         | no-preload-20211231102928-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:52 UTC | Fri, 31 Dec 2021 10:30:52 UTC |
	|         | no-preload-20211231102928-6736                   |                                        |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4 |                                        |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain           |                                        |         |         |                               |                               |
	|---------|--------------------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:29:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:29:53.376058  219726 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:29:53.376198  219726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:29:53.376208  219726 out.go:310] Setting ErrFile to fd 2...
	I1231 10:29:53.376212  219726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:29:53.376426  219726 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:29:53.376752  219726 out.go:304] Setting JSON to false
	I1231 10:29:53.378282  219726 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4348,"bootTime":1640942245,"procs":488,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:29:53.378376  219726 start.go:122] virtualization: kvm guest
	I1231 10:29:53.383065  219726 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:29:53.383303  219726 notify.go:174] Checking for updates...
	I1231 10:29:53.386360  219726 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:29:53.389529  219726 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:29:53.392775  219726 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:29:53.396943  219726 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:29:48.984789  216628 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.3.1: (3.532634371s)
	I1231 10:29:48.984828  216628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 from cache
	I1231 10:29:48.984855  216628 containerd.go:292] Loading image: /var/lib/minikube/images/etcd_3.5.1-0
	I1231 10:29:48.984892  216628 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0
	I1231 10:29:53.400842  219726 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:29:53.402334  219726 config.go:176] Loaded profile config "enable-default-cni-20211231101406-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:29:53.402563  219726 config.go:176] Loaded profile config "no-preload-20211231102928-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:29:53.402745  219726 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:29:53.402813  219726 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:29:53.467450  219726 docker.go:132] docker version: linux-20.10.12
	I1231 10:29:53.467589  219726 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:29:53.599848  219726 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2021-12-31 10:29:53.51136652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:29:53.599963  219726 docker.go:237] overlay module found
	I1231 10:29:53.603460  219726 out.go:176] * Using the docker driver based on user configuration
	I1231 10:29:53.603503  219726 start.go:280] selected driver: docker
	I1231 10:29:53.603511  219726 start.go:795] validating driver "docker" against <nil>
	I1231 10:29:53.603539  219726 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:29:53.603568  219726 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:29:53.603576  219726 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:29:53.603623  219726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:29:53.603650  219726 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:29:53.606759  219726 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:29:53.607628  219726 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:29:53.741338  219726 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2021-12-31 10:29:53.658332742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:29:53.741508  219726 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:29:53.741739  219726 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:29:53.741778  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:29:53.741791  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:29:53.741811  219726 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:29:53.741822  219726 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:29:53.741831  219726 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 10:29:53.741850  219726 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:29:53.745661  219726 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:29:53.745720  219726 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:29:53.748154  219726 out.go:176] * Pulling base image ...
	I1231 10:29:53.748298  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:29:53.748368  219726 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:29:53.748392  219726 cache.go:57] Caching tarball of preloaded images
	I1231 10:29:53.748460  219726 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:29:53.748799  219726 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:29:53.748829  219726 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:29:53.749003  219726 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:29:53.749040  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json: {Name:mk731142676845b023f187e596c264da3350559e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:53.796111  219726 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:29:53.796329  219726 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:29:53.796373  219726 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:29:53.796414  219726 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:29:53.796785  219726 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 347.78µs
	I1231 10:29:53.796832  219726 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:29:53.797021  219726 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:29:54.617257  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:56.810366  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:29:53.802174  219726 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1231 10:29:53.802477  219726 start.go:160] libmachine.API.Create for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:29:53.802527  219726 client.go:168] LocalClient.Create starting
	I1231 10:29:53.802626  219726 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:29:53.802661  219726 main.go:130] libmachine: Decoding PEM data...
	I1231 10:29:53.802679  219726 main.go:130] libmachine: Parsing certificate...
	I1231 10:29:53.802744  219726 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:29:53.802775  219726 main.go:130] libmachine: Decoding PEM data...
	I1231 10:29:53.802796  219726 main.go:130] libmachine: Parsing certificate...
	I1231 10:29:53.803411  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:29:53.852706  219726 cli_runner.go:180] docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:29:53.852901  219726 network_create.go:254] running [docker network inspect embed-certs-20211231102953-6736] to gather additional debugging logs...
	I1231 10:29:53.852944  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736
	W1231 10:29:53.899784  219726 cli_runner.go:180] docker network inspect embed-certs-20211231102953-6736 returned with exit code 1
	I1231 10:29:53.899862  219726 network_create.go:257] error running [docker network inspect embed-certs-20211231102953-6736]: docker network inspect embed-certs-20211231102953-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211231102953-6736
	I1231 10:29:53.899886  219726 network_create.go:259] output of [docker network inspect embed-certs-20211231102953-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211231102953-6736
	
	** /stderr **
	I1231 10:29:53.899978  219726 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:29:53.957771  219726 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-689da033f191 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:2a:98:ff}}
	I1231 10:29:53.958723  219726 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00049c208] misses:0}
	I1231 10:29:53.958763  219726 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:29:53.958781  219726 network_create.go:106] attempt to create docker network embed-certs-20211231102953-6736 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1231 10:29:53.958834  219726 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211231102953-6736
	I1231 10:29:54.068631  219726 network_create.go:90] docker network embed-certs-20211231102953-6736 192.168.58.0/24 created
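	For reference, the subnet and gateway minikube just assigned can be confirmed by hand with docker's Go-template formatting; a minimal sketch, using the network name from the log above:
	# Print the first IPAM subnet and gateway of the newly created bridge network
	docker network inspect embed-certs-20211231102953-6736 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected (per the log): 192.168.58.0/24 192.168.58.1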
	I1231 10:29:54.068682  219726 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20211231102953-6736" container
	I1231 10:29:54.068742  219726 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:29:54.122134  219726 cli_runner.go:133] Run: docker volume create embed-certs-20211231102953-6736 --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:29:54.167960  219726 oci.go:102] Successfully created a docker volume embed-certs-20211231102953-6736
	I1231 10:29:54.168064  219726 cli_runner.go:133] Run: docker run --rm --name embed-certs-20211231102953-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --entrypoint /usr/bin/test -v embed-certs-20211231102953-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:29:56.838780  219726 cli_runner.go:186] Completed: docker run --rm --name embed-certs-20211231102953-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --entrypoint /usr/bin/test -v embed-certs-20211231102953-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (2.670669448s)
	I1231 10:29:56.838813  219726 oci.go:106] Successfully prepared a docker volume embed-certs-20211231102953-6736
	I1231 10:29:56.838860  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:29:56.838891  219726 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:29:56.838956  219726 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211231102953-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
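	The two docker run invocations above rely on a named-volume idiom: mounting an empty named volume at /var makes Docker copy the image's /var contents into it on first use, and /usr/bin/test -d /var/lib as the entrypoint verifies the copy while leaving no container behind (--rm). A sketch of the same trick, with a hypothetical volume name:
	# Populate (and verify) a named volume from an image's /var directory
	docker volume create demo-var >/dev/null
	docker run --rm --entrypoint /usr/bin/test \
	  -v demo-var:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227 \
	  -d /var/lib && echo "volume populated"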
	I1231 10:29:55.415262  216628 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.1-0: (6.430309218s)
	I1231 10:29:55.415301  216628 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/images/k8s.gcr.io/etcd_3.5.1-0 from cache
	I1231 10:29:55.415337  216628 cache_images.go:123] Successfully loaded all cached images
	I1231 10:29:55.415343  216628 cache_images.go:92] LoadImages completed in 21.748525824s
	I1231 10:29:55.415399  216628 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:29:55.453446  216628 cni.go:93] Creating CNI manager for ""
	I1231 10:29:55.453482  216628 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:29:55.453497  216628 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:29:55.453526  216628 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20211231102928-6736 NodeName:no-preload-20211231102928-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:29:55.453695  216628 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20211231102928-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
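	A config like the one rendered above can be sanity-checked before the real run; a sketch, assuming kubeadm is on PATH and the file sits at the path this log writes it to later:
	# Parse the generated config and show what kubeadm would do, without changing the host
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run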
	
	I1231 10:29:55.453823  216628 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=no-preload-20211231102928-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2-rc.0 ClusterName:no-preload-20211231102928-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
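	In the drop-in above, the empty ExecStart= line is the standard systemd idiom for clearing the command inherited from the base kubelet.service before substituting the fully flagged one. The merged result can be inspected once the drop-in is installed; a sketch:
	# Show the base unit plus all drop-ins in the order systemd merges them
	systemctl cat kubelet
	# Reload unit files after editing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	sudo systemctl daemon-reload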
	I1231 10:29:55.453892  216628 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:29:55.463755  216628 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.23.2-rc.0': No such file or directory
	
	Initiating transfer...
	I1231 10:29:55.463823  216628 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:29:55.473801  216628 binary.go:67] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubectl.sha256
	I1231 10:29:55.473993  216628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl
	I1231 10:29:55.474074  216628 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/linux/v1.23.2-rc.0/kubelet
	I1231 10:29:55.474173  216628 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.2-rc.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/linux/v1.23.2-rc.0/kubeadm
	I1231 10:29:55.480469  216628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.2-rc.0/kubectl': No such file or directory
	I1231 10:29:55.480506  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/linux/v1.23.2-rc.0/kubectl --> /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl (46587904 bytes)
	I1231 10:29:56.058248  216628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubeadm
	I1231 10:29:56.062435  216628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.2-rc.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.2-rc.0/kubeadm': No such file or directory
	I1231 10:29:56.062493  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/linux/v1.23.2-rc.0/kubeadm --> /var/lib/minikube/binaries/v1.23.2-rc.0/kubeadm (45211648 bytes)
	I1231 10:29:56.625041  216628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:29:56.637837  216628 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubelet
	I1231 10:29:56.642425  216628 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.23.2-rc.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.23.2-rc.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet': No such file or directory
	I1231 10:29:56.642466  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/linux/v1.23.2-rc.0/kubelet --> /var/lib/minikube/binaries/v1.23.2-rc.0/kubelet (124513248 bytes)
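	The ?checksum=file:...sha256 suffix in the download URLs above tells minikube's downloader to verify each binary against the published digest before caching it. Done by hand, the equivalent is roughly this sketch (the release .sha256 files contain just the bare hex digest):
	v=v1.23.2-rc.0
	curl -fsSLO "https://storage.googleapis.com/kubernetes-release/release/${v}/bin/linux/amd64/kubectl"
	curl -fsSL "https://storage.googleapis.com/kubernetes-release/release/${v}/bin/linux/amd64/kubectl.sha256" -o kubectl.sha256
	# sha256sum expects "<digest>  <filename>"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check -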
	I1231 10:29:56.995000  216628 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:29:57.005497  216628 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (642 bytes)
	I1231 10:29:57.025390  216628 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1231 10:29:57.042749  216628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2085 bytes)
	I1231 10:29:57.062166  216628 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:29:57.067278  216628 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
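	The one-liner above is a small idempotent-update idiom for /etc/hosts: strip any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the temp file back in a single sudo step. Generalized (a sketch with a hypothetical hostname):
	ip=192.168.67.2 host=host.example.internal
	# Drop the old mapping (if any), append the new one, then install atomically-ish
	{ grep -v $'\t'"${host}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts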
	I1231 10:29:57.120905  216628 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736 for IP: 192.168.67.2
	I1231 10:29:57.121069  216628 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:29:57.121123  216628 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:29:57.121186  216628 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.key
	I1231 10:29:57.121205  216628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt with IPs: []
	I1231 10:29:57.219837  216628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt ...
	I1231 10:29:57.219874  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: {Name:mkfb0025c986bfd54d08cd9066049b5455b3e5b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.220112  216628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.key ...
	I1231 10:29:57.220124  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.key: {Name:mk8a70bbfc4fb065bc8a0e127f12b9c16d4382a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.220272  216628 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key.c7fa3a9e
	I1231 10:29:57.220294  216628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt.c7fa3a9e with IPs: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:29:57.322330  216628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt.c7fa3a9e ...
	I1231 10:29:57.322365  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt.c7fa3a9e: {Name:mka7f9b50b828d81746f9f058bbf38058b2c01d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.322561  216628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key.c7fa3a9e ...
	I1231 10:29:57.322581  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key.c7fa3a9e: {Name:mkb5602d04733259494a63169662caab21796ed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.322687  216628 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt
	I1231 10:29:57.322769  216628 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key
	I1231 10:29:57.322818  216628 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.key
	I1231 10:29:57.322835  216628 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.crt with IPs: []
	I1231 10:29:57.477827  216628 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.crt ...
	I1231 10:29:57.477864  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.crt: {Name:mk2bd59de2a7303f238ddeac6ffafd8bb4e5bdfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.478067  216628 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.key ...
	I1231 10:29:57.478076  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.key: {Name:mk67d5008a0401d47cdb4d1e2797f7ce0139c3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:57.478251  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:29:57.478281  216628 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:29:57.478290  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:29:57.478310  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:29:57.478331  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:29:57.478350  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:29:57.478420  216628 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:29:57.479335  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:29:57.506367  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:29:57.530925  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:29:57.586601  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:29:57.610682  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:29:57.633559  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:29:57.654800  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:29:57.676712  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:29:57.699031  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:29:57.723153  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:29:57.748095  216628 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:29:57.770515  216628 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:29:57.787234  216628 ssh_runner.go:195] Run: openssl version
	I1231 10:29:57.795581  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:29:57.807093  216628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:29:57.811674  216628 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:29:57.811738  216628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:29:57.820158  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:29:57.829804  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:29:57.841063  216628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:29:57.845665  216628 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:29:57.845742  216628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:29:57.852829  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:29:57.862957  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:29:57.872052  216628 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:29:57.875904  216628 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:29:57.875972  216628 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:29:57.881965  216628 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
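	The b5213941.0, 51391683.0, and 3ec20f2e.0 names above follow OpenSSL's hashed-directory convention: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets TLS clients locate a CA by subject. A sketch of creating one such link by hand:
	# Link a CA into the OpenSSL hash directory so verification can find it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"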
	I1231 10:29:57.892384  216628 kubeadm.go:388] StartCluster: {Name:no-preload-20211231102928-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:no-preload-20211231102928-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:29:57.892517  216628 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:29:57.892573  216628 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:29:57.922974  216628 cri.go:87] found id: ""
	I1231 10:29:57.923098  216628 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:29:57.932501  216628 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:29:57.942158  216628 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:29:57.942235  216628 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:29:57.950757  216628 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:29:57.950839  216628 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:29:58.317825  216628 out.go:203]   - Generating certificates and keys ...
	I1231 10:29:59.120692  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:01.617556  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:02.022400  216628 out.go:203]   - Booting up control plane ...
	I1231 10:30:03.618908  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:05.621392  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:07.107722  219726 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211231102953-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (10.268722151s)
	I1231 10:30:07.107761  219726 kic.go:188] duration metric: took 10.268868 seconds to extract preloaded images to volume
	W1231 10:30:07.107821  219726 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:30:07.107835  219726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:30:07.107917  219726 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:30:07.215154  219726 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20211231102953-6736 --name embed-certs-20211231102953-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --network embed-certs-20211231102953-6736 --ip 192.168.58.2 --volume embed-certs-20211231102953-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
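	Each --publish=127.0.0.1:: above asks Docker for an ephemeral host port bound to loopback; the SSH port that shows up later in this log (49397) can be recovered the same way minikube does; a sketch:
	# Which loopback port forwards to the container's sshd?
	docker port embed-certs-20211231102953-6736 22/tcp
	# e.g. 127.0.0.1:49397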
	I1231 10:30:07.698249  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Running}}
	I1231 10:30:07.747031  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:07.800842  219726 cli_runner.go:133] Run: docker exec embed-certs-20211231102953-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:30:07.895797  219726 oci.go:175] the created container "embed-certs-20211231102953-6736" has a running status.
	I1231 10:30:07.895839  219726 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa...
	I1231 10:30:08.260906  219726 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:30:08.117539  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:10.118547  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:08.354826  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:08.396121  219726 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:30:08.396141  219726 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20211231102953-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:30:08.501123  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:08.547863  219726 machine.go:88] provisioning docker machine ...
	I1231 10:30:08.547908  219726 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:30:08.547959  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:08.591246  219726 main.go:130] libmachine: Using SSH client type: native
	I1231 10:30:08.591522  219726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I1231 10:30:08.591549  219726 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:30:08.753114  219726 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:30:08.753238  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:08.809044  219726 main.go:130] libmachine: Using SSH client type: native
	I1231 10:30:08.809232  219726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I1231 10:30:08.809259  219726 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:30:08.957088  219726 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:30:08.957117  219726 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:30:08.957137  219726 ubuntu.go:177] setting up certificates
	I1231 10:30:08.957152  219726 provision.go:83] configureAuth start
	I1231 10:30:08.957216  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.010233  219726 provision.go:138] copyHostCerts
	I1231 10:30:09.010303  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:30:09.010315  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:30:09.010395  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:30:09.010485  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:30:09.010497  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:30:09.010528  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:30:09.010590  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:30:09.010597  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:30:09.010624  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:30:09.010676  219726 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
	I1231 10:30:09.283340  219726 provision.go:172] copyRemoteCerts
	I1231 10:30:09.283435  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:30:09.283484  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.337350  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.440676  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:30:09.463186  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:30:09.485013  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:30:09.508442  219726 provision.go:86] duration metric: configureAuth took 551.277077ms
	I1231 10:30:09.508471  219726 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:30:09.508677  219726 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:30:09.508695  219726 machine.go:91] provisioned docker machine in 960.804304ms
	I1231 10:30:09.508703  219726 client.go:171] LocalClient.Create took 15.706168916s
	I1231 10:30:09.508724  219726 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20211231102953-6736" took 15.706248821s
	I1231 10:30:09.508739  219726 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:30:09.508745  219726 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:30:09.508811  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:30:09.508887  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.552519  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.653048  219726 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:30:09.656209  219726 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:30:09.656337  219726 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:30:09.656368  219726 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:30:09.656374  219726 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:30:09.656387  219726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:30:09.656444  219726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:30:09.656508  219726 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:30:09.656584  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:30:09.664537  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:30:09.684913  219726 start.go:270] post-start completed in 176.156416ms
	I1231 10:30:09.685368  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.730715  219726 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:30:09.730996  219726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:30:09.731047  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.772008  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.874968  219726 start.go:129] duration metric: createHost completed in 16.077915345s
	I1231 10:30:09.875008  219726 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 16.078206836s
	I1231 10:30:09.875092  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.927414  219726 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:30:09.927420  219726 ssh_runner.go:195] Run: systemctl --version
	I1231 10:30:09.927485  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.927564  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.967583  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.979664  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:10.068879  219726 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:30:10.093867  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:30:10.106073  219726 docker.go:158] disabling docker service ...
	I1231 10:30:10.106133  219726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:30:10.128894  219726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:30:10.142227  219726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:30:10.239117  219726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:30:10.341506  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:30:10.353172  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:30:10.369434  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I1231 10:30:10.390311  219726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:30:10.409079  219726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:30:10.421339  219726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:30:10.510170  219726 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:30:10.593325  219726 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:30:10.593473  219726 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:30:10.601186  219726 start.go:458] Will wait 60s for crictl version
	I1231 10:30:10.601328  219726 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:30:10.639912  219726 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:30:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:30:15.526785  216628 out.go:203]   - Configuring RBAC rules ...
	I1231 10:30:15.942609  216628 cni.go:93] Creating CNI manager for ""
	I1231 10:30:15.942637  216628 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:30:12.617324  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:14.617696  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:16.617970  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:15.947245  216628 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:30:15.947352  216628 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:30:15.953119  216628 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl ...
	I1231 10:30:15.953150  216628 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:30:15.969722  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:30:16.878441  216628 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:30:16.878517  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:16.878524  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=no-preload-20211231102928-6736 minikube.k8s.io/updated_at=2021_12_31T10_30_16_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:16.899568  216628 ops.go:34] apiserver oom_adj: -16
	I1231 10:30:16.986829  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:17.557246  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:18.057908  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:18.557579  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:21.688378  219726 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:30:21.714013  219726 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:30:21.714074  219726 ssh_runner.go:195] Run: containerd --version
	I1231 10:30:21.734634  219726 ssh_runner.go:195] Run: containerd --version
	I1231 10:30:21.762709  219726 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:30:21.762821  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:30:21.799238  219726 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:30:21.803366  219726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:30:21.818263  219726 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:30:21.820836  219726 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:30:18.618038  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:20.618145  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:21.823252  219726 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:30:21.823330  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:30:21.823388  219726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:30:21.849713  219726 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:30:21.849739  219726 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:30:21.849780  219726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:30:21.877125  219726 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:30:21.877150  219726 cache_images.go:84] Images are preloaded, skipping loading
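The two `sudo crictl images --output json` runs above confirm that every image needed for v1.23.1 is already in the containerd image store, so neither the preload tarball nor individual image loads are needed. A sketch of reading that JSON; the struct follows crictl's observed output shape (a top-level "images" array with per-image "repoTags"), which is an assumption here, not a documented contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors the relevant part of `crictl images --output json`
// (assumed shape: an "images" array with per-image "repoTags").
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("listing images:", err)
		return
	}
	var parsed criImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Println("parsing:", err)
		return
	}
	// "all images are preloaded" in the log means every expected tag shows up here.
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}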
	I1231 10:30:21.877194  219726 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:30:21.905205  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:30:21.905228  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:30:21.905240  219726 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:30:21.905251  219726 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:30:21.905384  219726 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:30:21.905475  219726 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
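The [Unit]/[Service] block above is the kubelet systemd drop-in minikube is about to write as 10-kubeadm.conf (638 bytes, a few lines below); the empty ExecStart= line clears the packaged default before setting the real command, and the flags, including the ExtraOptions from the config dump, are rendered in sorted order. A sketch of producing such a drop-in with text/template; the template text and types are illustrative, not minikube's implementation:

package main

import (
	"os"
	"sort"
	"text/template"
)

// kubeletUnit is an illustrative template for the drop-in shown above; the
// empty ExecStart= resets the distro default before the real command line.
const kubeletUnit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.Kubelet}}{{range .Flags}} {{.}}{{end}}

[Install]
`

func main() {
	flags := map[string]string{
		"--hostname-override":     "embed-certs-20211231102953-6736",
		"--node-ip":               "192.168.58.2",
		"--cni-conf-dir":          "/etc/cni/net.mk",
		"--housekeeping-interval": "5m",
	}
	var rendered []string
	for k, v := range flags {
		rendered = append(rendered, k+"="+v)
	}
	sort.Strings(rendered) // kubelet flags are emitted sorted, as in the log
	tmpl := template.Must(template.New("unit").Parse(kubeletUnit))
	if err := tmpl.Execute(os.Stdout, struct {
		Kubelet string
		Flags   []string
	}{"/var/lib/minikube/binaries/v1.23.1/kubelet", rendered}); err != nil {
		panic(err)
	}
}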
	I1231 10:30:21.905533  219726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:30:21.913446  219726 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:30:21.913513  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:30:21.921381  219726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:30:21.936402  219726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:30:21.952335  219726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1231 10:30:21.968888  219726 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:30:21.972604  219726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
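The /etc/hosts one-liner above is an idempotent upsert: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is copied back with sudo cp (a temp file is used because plain output redirection would run as the unprivileged user). The same logic in Go, as an illustration rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites the hosts file so exactly one line maps name to ip,
// matching the grep -v / echo / sudo cp pattern in the log above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}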
	I1231 10:30:21.982979  219726 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:30:21.983113  219726 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:30:21.983168  219726 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:30:21.983238  219726 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:30:21.983257  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt with IP's: []
	I1231 10:30:22.106776  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt ...
	I1231 10:30:22.106805  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt: {Name:mk0e03868cc7fe1f3dfcf9da79e287f6262395fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.107019  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key ...
	I1231 10:30:22.107036  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key: {Name:mk515384d2d137e364a1ad1b8d3ee128168482e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.107147  219726 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:30:22.107172  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:30:22.221000  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 ...
	I1231 10:30:22.221038  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041: {Name:mk87e0f8df7efed1ae47a75dc9cd5f3b398499ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.221261  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041 ...
	I1231 10:30:22.221281  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041: {Name:mk2f4f21430846cda040c7943b85a47bdc4f75ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.221397  219726 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt
	I1231 10:30:22.221476  219726 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key
	I1231 10:30:22.221545  219726 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:30:22.221585  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt with IP's: []
	I1231 10:30:22.311158  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt ...
	I1231 10:30:22.311192  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt: {Name:mkf2db88e9496502bef29f04821dba5ff38cf63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.311420  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key ...
	I1231 10:30:22.311448  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key: {Name:mk1243477406232b534e0778e1cb4e2252483e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.312115  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:30:22.312173  219726 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:30:22.312184  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:30:22.312215  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:30:22.312273  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:30:22.312307  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:30:22.312364  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:30:22.314205  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:30:22.334696  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:30:22.354429  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:30:22.374158  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:30:22.394468  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:30:22.414684  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:30:22.434624  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:30:22.454604  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:30:22.474147  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:30:22.495690  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:30:22.516417  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:30:22.537174  219726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:30:22.551766  219726 ssh_runner.go:195] Run: openssl version
	I1231 10:30:22.557176  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:30:22.566306  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.570734  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.570792  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.576546  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:30:22.585864  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:30:22.595296  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.598833  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.598899  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.605189  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:30:22.615268  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:30:22.625714  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.629277  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.629338  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.634780  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
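Between 10:30:22.55 and 10:30:22.63 the runner installs three CA certificates: each PEM is copied under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), the naming scheme OpenSSL uses to locate a CA during verification. A sketch of deriving the hash and creating the link; the function name and paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir under its OpenSSL subject
// hash, the lookup scheme behind names like /etc/ssl/certs/b5213941.0.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", dir, hash)
	os.Remove(link) // replace any stale link, like the ln -fs above
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}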
	I1231 10:30:22.643631  219726 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:30:22.643738  219726 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:30:22.643789  219726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:30:22.672067  219726 cri.go:87] found id: ""
	I1231 10:30:22.672126  219726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:30:22.680710  219726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:30:22.689387  219726 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:30:22.689452  219726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:30:22.698676  219726 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:30:22.698754  219726 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:30:19.057223  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:19.557079  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:20.057516  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:20.557475  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:21.057563  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:21.556947  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:22.056996  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:22.557413  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:23.056968  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:23.557302  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:23.118084  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:25.120431  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:24.057508  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:24.557275  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:25.057412  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:25.557188  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:26.057491  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:26.557554  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:27.057046  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:27.557114  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:28.057168  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:28.557511  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:29.057783  216628 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:29.120277  216628 kubeadm.go:864] duration metric: took 12.24181684s to wait for elevateKubeSystemPrivileges.
	I1231 10:30:29.120317  216628 kubeadm.go:390] StartCluster complete in 31.227943072s
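The burst of `kubectl get sa default` calls above, one roughly every 500ms from 10:30:16 to 10:30:29, is the elevateKubeSystemPrivileges wait: the cluster-admin binding for the kube-system default service account can only take effect once kubeadm has created that account, which happens asynchronously after the control plane boots. A sketch of the wait loop; the function name and timeout are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds,
// matching the ~500ms polling cadence in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.2-rc.0/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}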
	I1231 10:30:29.120342  216628 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:29.120435  216628 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:30:29.122025  216628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:29.641532  216628 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20211231102928-6736" rescaled to 1
	I1231 10:30:29.641587  216628 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}
	I1231 10:30:29.645486  216628 out.go:176] * Verifying Kubernetes components...
	I1231 10:30:29.641688  216628 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:30:29.641818  216628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:30:29.641897  216628 config.go:176] Loaded profile config "no-preload-20211231102928-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:30:29.645600  216628 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20211231102928-6736"
	I1231 10:30:29.645623  216628 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20211231102928-6736"
	W1231 10:30:29.645628  216628 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:30:29.645655  216628 host.go:66] Checking if "no-preload-20211231102928-6736" exists ...
	I1231 10:30:29.645651  216628 addons.go:65] Setting default-storageclass=true in profile "no-preload-20211231102928-6736"
	I1231 10:30:29.645682  216628 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20211231102928-6736"
	I1231 10:30:29.645662  216628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:30:29.645970  216628 cli_runner.go:133] Run: docker container inspect no-preload-20211231102928-6736 --format={{.State.Status}}
	I1231 10:30:29.646111  216628 cli_runner.go:133] Run: docker container inspect no-preload-20211231102928-6736 --format={{.State.Status}}
	I1231 10:30:29.707621  216628 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:30:29.705352  216628 addons.go:153] Setting addon default-storageclass=true in "no-preload-20211231102928-6736"
	W1231 10:30:29.707724  216628 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:30:29.707765  216628 host.go:66] Checking if "no-preload-20211231102928-6736" exists ...
	I1231 10:30:29.707805  216628 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:29.707818  216628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:30:29.707877  216628 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211231102928-6736
	I1231 10:30:29.708373  216628 cli_runner.go:133] Run: docker container inspect no-preload-20211231102928-6736 --format={{.State.Status}}
	I1231 10:30:29.755815  216628 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:29.755845  216628 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:30:29.755895  216628 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20211231102928-6736
	I1231 10:30:29.757557  216628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/no-preload-20211231102928-6736/id_rsa Username:docker}
	I1231 10:30:29.780579  216628 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:30:29.782713  216628 node_ready.go:35] waiting up to 6m0s for node "no-preload-20211231102928-6736" to be "Ready" ...
	I1231 10:30:29.802211  216628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/no-preload-20211231102928-6736/id_rsa Username:docker}
	I1231 10:30:30.002766  216628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:30.098557  216628 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:30.296394  216628 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
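The pipeline at 10:30:29.780 fetches the coredns ConfigMap, uses sed to splice a hosts stanza in front of the `forward . /etc/resolv.conf` directive, and replaces the ConfigMap, so in-cluster lookups of host.minikube.internal resolve to the host gateway (192.168.67.1 here). Reconstructed from the sed expression, the patched Corefile fragment reads:

        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough keyword hands any other name back to the rest of the chain, so ordinary cluster and upstream DNS behavior is unchanged.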
	I1231 10:30:27.617851  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:29.617978  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:32.117705  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:30.591731  216628 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:30:30.591897  216628 addons.go:417] enableAddons completed in 950.204656ms
	I1231 10:30:31.794221  216628 node_ready.go:58] node "no-preload-20211231102928-6736" has status "Ready":"False"
	I1231 10:30:34.617641  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:37.118058  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:38.173908  219726 out.go:203]   - Generating certificates and keys ...
	I1231 10:30:38.178218  219726 out.go:203]   - Booting up control plane ...
	I1231 10:30:38.181459  219726 out.go:203]   - Configuring RBAC rules ...
	I1231 10:30:38.183420  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:30:38.183445  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:30:38.185734  219726 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:30:38.185817  219726 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:30:38.190367  219726 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:30:38.190395  219726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:30:38.205669  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:30:34.293674  216628 node_ready.go:58] node "no-preload-20211231102928-6736" has status "Ready":"False"
	I1231 10:30:36.293755  216628 node_ready.go:58] node "no-preload-20211231102928-6736" has status "Ready":"False"
	I1231 10:30:39.617030  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:41.618652  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:39.153015  219726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:30:39.153139  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.153139  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_30_39_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.281905  219726 ops.go:34] apiserver oom_adj: -16
	I1231 10:30:39.282099  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.845525  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:40.344860  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:40.845513  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:41.345034  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:41.845526  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:42.345451  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:42.845489  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:38.793840  216628 node_ready.go:58] node "no-preload-20211231102928-6736" has status "Ready":"False"
	I1231 10:30:41.292969  216628 node_ready.go:49] node "no-preload-20211231102928-6736" has status "Ready":"True"
	I1231 10:30:41.293007  216628 node_ready.go:38] duration metric: took 11.510262814s waiting for node "no-preload-20211231102928-6736" to be "Ready" ...
	I1231 10:30:41.293021  216628 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1231 10:30:41.306481  216628 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-mx7df" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.329421  216628 pod_ready.go:92] pod "coredns-64897985d-mx7df" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.329484  216628 pod_ready.go:81] duration metric: took 1.022962613s waiting for pod "coredns-64897985d-mx7df" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.329498  216628 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.335376  216628 pod_ready.go:92] pod "etcd-no-preload-20211231102928-6736" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.335399  216628 pod_ready.go:81] duration metric: took 5.893006ms waiting for pod "etcd-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.335416  216628 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.341325  216628 pod_ready.go:92] pod "kube-apiserver-no-preload-20211231102928-6736" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.341351  216628 pod_ready.go:81] duration metric: took 5.927545ms waiting for pod "kube-apiserver-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.341365  216628 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.346928  216628 pod_ready.go:92] pod "kube-controller-manager-no-preload-20211231102928-6736" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.346961  216628 pod_ready.go:81] duration metric: took 5.587624ms waiting for pod "kube-controller-manager-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.346977  216628 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77xld" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.493833  216628 pod_ready.go:92] pod "kube-proxy-77xld" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.493859  216628 pod_ready.go:81] duration metric: took 146.873024ms waiting for pod "kube-proxy-77xld" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.493872  216628 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.894051  216628 pod_ready.go:92] pod "kube-scheduler-no-preload-20211231102928-6736" in "kube-system" namespace has status "Ready":"True"
	I1231 10:30:42.894077  216628 pod_ready.go:81] duration metric: took 400.196716ms waiting for pod "kube-scheduler-no-preload-20211231102928-6736" in "kube-system" namespace to be "Ready" ...
	I1231 10:30:42.894092  216628 pod_ready.go:38] duration metric: took 1.601056721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
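The pod_ready waits above treat a pod as ready only when its PodReady condition reports True, not merely when it is Running. Expressed with client-go, the check looks roughly like this (a sketch; the log's own implementation is minikube's pod_ready.go, not shown here):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the same
// signal the pod_ready waits above are polling for.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "coredns-64897985d-mx7df")
	fmt.Println(ok, err)
}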
	I1231 10:30:42.894116  216628 api_server.go:51] waiting for apiserver process to appear ...
	I1231 10:30:42.894162  216628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:30:42.917163  216628 api_server.go:71] duration metric: took 13.275543936s to wait for apiserver process to appear ...
	I1231 10:30:42.917208  216628 api_server.go:87] waiting for apiserver healthz status ...
	I1231 10:30:42.917221  216628 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1231 10:30:42.922691  216628 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1231 10:30:42.923704  216628 api_server.go:140] control plane version: v1.23.2-rc.0
	I1231 10:30:42.923829  216628 api_server.go:130] duration metric: took 6.610217ms to wait for apiserver health ...
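The health gate above is a plain HTTPS GET against https://192.168.67.2:8443/healthz that expects a 200 with body "ok"; the endpoint is typically readable without credentials because kube-apiserver exposes /healthz to unauthenticated clients via the system:public-info-viewer role. A minimal probe sketch; skipping TLS verification is a simplification here, the real check trusts minikubeCA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz issues the same GET the log shows against the apiserver
// healthz endpoint and expects a 200 "ok".
func checkHealthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		// The real check trusts minikubeCA; skipping verification keeps
		// this sketch self-contained.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.67.2:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}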
	I1231 10:30:42.923858  216628 system_pods.go:43] waiting for kube-system pods to appear ...
	I1231 10:30:43.096213  216628 system_pods.go:59] 8 kube-system pods found
	I1231 10:30:43.096305  216628 system_pods.go:61] "coredns-64897985d-mx7df" [fcec5084-c4aa-4435-a41b-812f065ff736] Running
	I1231 10:30:43.096320  216628 system_pods.go:61] "etcd-no-preload-20211231102928-6736" [7de97d6e-060d-4e43-a526-43e3bcfc7eb5] Running
	I1231 10:30:43.096324  216628 system_pods.go:61] "kindnet-vpk25" [2f179eb2-b2ad-4051-bc1d-7c6ec958e795] Running
	I1231 10:30:43.096328  216628 system_pods.go:61] "kube-apiserver-no-preload-20211231102928-6736" [5a48c3d6-a630-43ab-89a5-4279d3f20a8a] Running
	I1231 10:30:43.096332  216628 system_pods.go:61] "kube-controller-manager-no-preload-20211231102928-6736" [4b52cbd7-1d9d-4290-807d-aecf71504fb0] Running
	I1231 10:30:43.096336  216628 system_pods.go:61] "kube-proxy-77xld" [1f6006f1-fa6f-4e0c-929f-bcfda174bd51] Running
	I1231 10:30:43.096340  216628 system_pods.go:61] "kube-scheduler-no-preload-20211231102928-6736" [7e15f439-33e8-4469-bf2d-4d3d325c29f4] Running
	I1231 10:30:43.096343  216628 system_pods.go:61] "storage-provisioner" [38af69a2-b36d-4a0b-8721-2ea34e6b6e47] Running
	I1231 10:30:43.096348  216628 system_pods.go:74] duration metric: took 172.485447ms to wait for pod list to return data ...
	I1231 10:30:43.096362  216628 default_sa.go:34] waiting for default service account to be created ...
	I1231 10:30:43.295441  216628 default_sa.go:45] found service account: "default"
	I1231 10:30:43.295489  216628 default_sa.go:55] duration metric: took 199.11991ms for default service account to be created ...
	I1231 10:30:43.295521  216628 system_pods.go:116] waiting for k8s-apps to be running ...
	I1231 10:30:43.495982  216628 system_pods.go:86] 8 kube-system pods found
	I1231 10:30:43.496013  216628 system_pods.go:89] "coredns-64897985d-mx7df" [fcec5084-c4aa-4435-a41b-812f065ff736] Running
	I1231 10:30:43.496019  216628 system_pods.go:89] "etcd-no-preload-20211231102928-6736" [7de97d6e-060d-4e43-a526-43e3bcfc7eb5] Running
	I1231 10:30:43.496023  216628 system_pods.go:89] "kindnet-vpk25" [2f179eb2-b2ad-4051-bc1d-7c6ec958e795] Running
	I1231 10:30:43.496028  216628 system_pods.go:89] "kube-apiserver-no-preload-20211231102928-6736" [5a48c3d6-a630-43ab-89a5-4279d3f20a8a] Running
	I1231 10:30:43.496032  216628 system_pods.go:89] "kube-controller-manager-no-preload-20211231102928-6736" [4b52cbd7-1d9d-4290-807d-aecf71504fb0] Running
	I1231 10:30:43.496035  216628 system_pods.go:89] "kube-proxy-77xld" [1f6006f1-fa6f-4e0c-929f-bcfda174bd51] Running
	I1231 10:30:43.496039  216628 system_pods.go:89] "kube-scheduler-no-preload-20211231102928-6736" [7e15f439-33e8-4469-bf2d-4d3d325c29f4] Running
	I1231 10:30:43.496042  216628 system_pods.go:89] "storage-provisioner" [38af69a2-b36d-4a0b-8721-2ea34e6b6e47] Running
	I1231 10:30:43.496048  216628 system_pods.go:126] duration metric: took 200.500882ms to wait for k8s-apps to be running ...
	I1231 10:30:43.496054  216628 system_svc.go:44] waiting for kubelet service to be running ....
	I1231 10:30:43.496100  216628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:30:43.507340  216628 system_svc.go:56] duration metric: took 11.273356ms WaitForService to wait for kubelet.
	I1231 10:30:43.507374  216628 kubeadm.go:542] duration metric: took 13.865763765s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1231 10:30:43.507405  216628 node_conditions.go:102] verifying NodePressure condition ...
	I1231 10:30:43.693864  216628 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I1231 10:30:43.693893  216628 node_conditions.go:123] node cpu capacity is 8
	I1231 10:30:43.693916  216628 node_conditions.go:105] duration metric: took 186.506694ms to run NodePressure ...
	I1231 10:30:43.693927  216628 start.go:211] waiting for startup goroutines ...
	I1231 10:30:43.730043  216628 start.go:493] kubectl: 1.23.1, cluster: 1.23.2-rc.0 (minor skew: 0)
	I1231 10:30:43.733004  216628 out.go:176] * Done! kubectl is now configured to use "no-preload-20211231102928-6736" cluster and "default" namespace by default
	I1231 10:30:44.118016  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:46.618570  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:43.345140  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:43.844830  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:44.344899  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:44.845153  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:45.345227  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:45.845302  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:46.345154  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:46.845391  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:47.345449  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:47.845519  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:49.118277  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:51.618149  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:48.345688  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:48.845088  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:49.345699  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:49.845346  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:50.345578  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:50.845507  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:51.345522  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:51.845536  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:52.345479  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:52.497499  219726 kubeadm.go:864] duration metric: took 13.34441416s to wait for elevateKubeSystemPrivileges.
	I1231 10:30:52.497540  219726 kubeadm.go:390] StartCluster complete in 29.853919823s
	I1231 10:30:52.497564  219726 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:52.497694  219726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:30:52.500090  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:53.023863  219726 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:30:53.023925  219726 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:30:53.026242  219726 out.go:176] * Verifying Kubernetes components...
	I1231 10:30:53.026320  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:30:53.023971  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:30:53.024000  219726 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:30:53.026419  219726 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:30:53.024386  219726 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:30:53.026452  219726 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:30:53.026467  219726 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:30:53.026447  219726 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:30:53.026534  219726 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:30:53.026572  219726 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:30:53.026871  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.027068  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.082105  219726 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:30:53.080041  219726 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	W1231 10:30:53.082139  219726 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:30:53.082173  219726 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:30:53.082262  219726 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:53.082283  219726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:30:53.082340  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:53.082693  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.120606  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:30:53.123669  219726 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:30:53.134370  219726 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:53.134399  219726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:30:53.134450  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:53.137195  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:53.194741  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:53.291532  219726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:53.398441  219726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:53.591519  219726 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1231 10:30:54.117720  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:56.618943  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:53.826074  219726 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:30:53.826115  219726 addons.go:417] enableAddons completed in 802.130055ms
	I1231 10:30:55.133779  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:30:57.632633  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:30:58.620822  207964 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:30:58.620862  207964 node_ready.go:38] duration metric: took 4m0.010971187s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:30:58.623527  207964 out.go:176] 
	W1231 10:30:58.623707  207964 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:30:58.623722  207964 out.go:241] * 
	W1231 10:30:58.624497  207964 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	56e208454f919       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   04b659f964be5
	f3c41a92f3dcb       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   04b659f964be5
	91f9570ac5962       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   6f185cd7b6c56
	090a101afa0e5       b2756210eeabf       4 minutes ago        Running             etcd                      0                   295e37d445215
	a0fea282c2cab       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   410795c7cb2b8
	fddc6f96e1ab6       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   e70ba3548c048
	c5161903fa798       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   3badc9a2068b0
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:30:59 UTC. --
	Dec 31 10:26:33 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:33.303867024Z" level=info msg="StartContainer for \"a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe\" returns successfully"
	Dec 31 10:26:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:57.421512284Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.188396137Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-hdtr6,Uid:f37dfb0d-e6fa-4cb6-ac3d-8459d648ae54,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.196174147Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-gjbqc,Uid:a71bd990-5819-4720-aba3-d5cdc1c779dd,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.214137818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f185cd7b6c56b4c4661d5c06b7b29a3468f5196a6d31602a4745eae903620d1 pid=1759
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.222695437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926 pid=1780
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.281029390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hdtr6,Uid:f37dfb0d-e6fa-4cb6-ac3d-8459d648ae54,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f185cd7b6c56b4c4661d5c06b7b29a3468f5196a6d31602a4745eae903620d1\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.284166690Z" level=info msg="CreateContainer within sandbox \"6f185cd7b6c56b4c4661d5c06b7b29a3468f5196a6d31602a4745eae903620d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.404679678Z" level=info msg="CreateContainer within sandbox \"6f185cd7b6c56b4c4661d5c06b7b29a3468f5196a6d31602a4745eae903620d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.405710982Z" level=info msg="StartContainer for \"91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.409304049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-gjbqc,Uid:a71bd990-5819-4720-aba3-d5cdc1c779dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.413448595Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.498224015Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.498936923Z" level=info msg="StartContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.585063558Z" level=info msg="StartContainer for \"91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512\" returns successfully"
	Dec 31 10:26:58 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:26:58.891261448Z" level=info msg="StartContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\" returns successfully"
	Dec 31 10:29:39 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:39.297028427Z" level=info msg="Finish piping stdout of container \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:29:39 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:39.297148762Z" level=info msg="Finish piping stderr of container \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:29:39 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:39.299360650Z" level=info msg="TaskExit event &TaskExit{ContainerID:f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7,ID:f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7,Pid:1915,ExitStatus:2,ExitedAt:2021-12-31 10:29:39.298781133 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:29:41 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:41.693996606Z" level=info msg="shim disconnected" id=f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7
	Dec 31 10:29:41 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:41.694104797Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:29:42 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:42.450482721Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Dec 31 10:29:42 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:42.524485611Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\""
	Dec 31 10:29:42 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:42.525121864Z" level=info msg="StartContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\""
	Dec 31 10:29:42 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:29:42.712533611Z" level=info msg="StartContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_26_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:26:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:30:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:30:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:30:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:30:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                kindnet-gjbqc                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                kube-proxy-hdtr6                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  Starting                 4m27s                  kubelet, old-k8s-version-20211231102602-6736     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet, old-k8s-version-20211231102602-6736     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951870] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.019818] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023899] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[ +23.875945] IPv4: martian source 10.244.0.2 from 10.244.0.2, on dev veth86742aa3
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a f1 3b 4b a3 2f 08 06
	[  +2.695360] IPv4: martian source 10.244.0.3 from 10.244.0.3, on dev vethf512694d
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe c0 4d 65 97 da 08 06
	[  +7.350477] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.015589] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +0.451647] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev veth7caea1a4
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 82 72 93 e8 96 a7 08 06
	[  +0.572352] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.959830] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.007824] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.027971] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6] <==
	* 2021-12-31 10:26:33.398007 W | auth: simple token is not cryptographically signed
	2021-12-31 10:26:33.402390 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2021-12-31 10:26:33.402842 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-12-31 10:26:33.403290 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:26:33.404899 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:26:33.405089 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:26:33.405340 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:26:34.390892 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:26:34.390937 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:26:34.390955 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:26:34.390970 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:26:34.390978 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:26:34.391151 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:26:34.392751 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:26:34.392798 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:26:34.392812 I | embed: ready to serve client requests
	2021-12-31 10:26:34.392846 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:26:34.392895 I | embed: ready to serve client requests
	2021-12-31 10:26:34.395926 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:26:34.396034 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:29:41.263024 W | etcdserver: request "header:<ID:8128010034796901496 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:446 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128010034796901494 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>" with result "size:16" took too long (169.430776ms) to execute
	2021-12-31 10:29:41.263180 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (170.047688ms) to execute
	2021-12-31 10:29:41.361952 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (245.828121ms) to execute
	2021-12-31 10:29:41.569861 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (305.209221ms) to execute
	2021-12-31 10:29:56.808390 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (191.596053ms) to execute
	
	* 
	* ==> kernel <==
	*  10:31:00 up  1:13,  0 users,  load average: 2.18, 2.66, 2.84
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe] <==
	* I1231 10:26:37.667487       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I1231 10:26:37.667494       1 naming_controller.go:288] Starting NamingConditionController
	I1231 10:26:37.667500       1 establishing_controller.go:73] Starting EstablishingController
	E1231 10:26:37.691341       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1231 10:26:37.778842       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:26:37.778919       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1231 10:26:37.779139       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:26:37.779195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:26:38.665987       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1231 10:26:38.666024       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:26:38.666037       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:26:38.670056       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1231 10:26:38.673207       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:26:38.673236       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1231 10:26:40.447719       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:26:40.727470       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1231 10:26:41.010135       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1231 10:26:41.011069       1 controller.go:606] quota admission added evaluator for: endpoints
	I1231 10:26:41.096654       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:26:41.899110       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1231 10:26:42.121597       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1231 10:26:42.438856       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1231 10:26:57.699475       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1231 10:26:57.711823       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1231 10:26:57.751097       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0] <==
	* I1231 10:26:57.615461       1 shared_informer.go:204] Caches are synced for PV protection 
	I1231 10:26:57.656384       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1231 10:26:57.682073       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1231 10:26:57.697117       1 shared_informer.go:204] Caches are synced for deployment 
	I1231 10:26:57.700183       1 shared_informer.go:204] Caches are synced for attach detach 
	I1231 10:26:57.702252       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1231 10:26:57.703189       1 shared_informer.go:204] Caches are synced for expand 
	I1231 10:26:57.709182       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:57.717504       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-cqjc7
	I1231 10:26:57.738063       1 shared_informer.go:204] Caches are synced for disruption 
	I1231 10:26:57.738105       1 disruption.go:341] Sending events to api server.
	I1231 10:26:57.747623       1 shared_informer.go:204] Caches are synced for daemon sets 
	I1231 10:26:57.787815       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gjbqc
	I1231 10:26:57.787853       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hdtr6
	I1231 10:26:57.808977       1 shared_informer.go:204] Caches are synced for stateful set 
	I1231 10:26:57.809469       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.809546       1 shared_informer.go:204] Caches are synced for resource quota 
	E1231 10:26:57.815277       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", ResourceVersion:"232", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000627120), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0006271e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627220)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627640)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac17c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136cc98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc5a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e020)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cce0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1231 10:26:57.816055       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", ResourceVersion:"215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000626d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a48740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626d80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000626ea0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac1680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136ca78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e018)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cab8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1231 10:26:57.878955       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.879123       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:26:57.889624       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1231 10:26:57.907023       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:58.905952       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I1231 10:26:59.006259       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512] <==
	* W1231 10:26:58.698313       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:26:58.707406       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:26:58.707466       1 server_others.go:149] Using iptables Proxier.
	I1231 10:26:58.708676       1 server.go:529] Version: v1.16.0
	I1231 10:26:58.709278       1 config.go:131] Starting endpoints config controller
	I1231 10:26:58.709318       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:26:58.709660       1 config.go:313] Starting service config controller
	I1231 10:26:58.709692       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:26:58.809529       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1231 10:26:58.809853       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4] <==
	* E1231 10:26:37.805362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:37.806980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:37.807074       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:37.807236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:37.882619       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:37.882673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:37.882844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:37.882953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.883629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.884708       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:37.885065       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:38.807000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:38.808398       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:38.809398       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:38.810390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:38.884131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:38.885117       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:38.886254       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:38.887753       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.888713       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.889525       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:38.890428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:57.722315       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:57.792300       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:59.497262       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:31:00 UTC. --
	Dec 31 10:29:57 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:29:57.229087     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:02 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:02.230148     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:02 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:02.991973     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:02 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:02.992034     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:07 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:07.231059     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:12 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:12.232390     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:13 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:13.022673     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:13 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:13.022710     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:17 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:17.233289     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:22 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:22.234414     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:23 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:23.052571     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:23 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:23.052615     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:27 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:27.235222     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:32 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:32.236273     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:33 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:33.088133     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:33 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:33.088175     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:37 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:37.237117     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:42 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:42.237852     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:43 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:43.121650     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:43 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:43.121703     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:47 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:47.239945     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:52 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:52.240916     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:30:53 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:53.167615     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:30:53 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:53.167694     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:30:57 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:30:57.241893     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1 (51.413288ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-cqjc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (298.48s)
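
The kubelet log above repeats "cni plugin not initialized", which is why coredns and storage-provisioner never reached Running. The post-mortem describe then fails with NotFound for two likely reasons: it runs without a namespace flag, so coredns-5644d7b6d9-cqjc7 and storage-provisioner (which live in kube-system) are looked up in the default namespace, and non-running pods can be deleted or recreated under new names between the list at helpers_test.go:262 and the describe at helpers_test.go:276. A minimal sketch of a race-tolerant post-mortem, assuming the same kubectl CLI the harness shells out to; the jsonpath template and per-pod loop are illustrative, not the harness's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "old-k8s-version-20211231102602-6736"
	// Re-list non-running pods immediately before describing them, keeping
	// the namespace so the describe does not default to "default".
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={range .items[*]}{.metadata.namespace}/{.metadata.name}{\"\\n\"}{end}").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		parts := strings.SplitN(line, "/", 2)
		if len(parts) != 2 {
			continue
		}
		// Describe one pod at a time so a single vanished pod does not
		// abort the output for the rest (the run above exited 1 instead).
		desc, _ := exec.Command("kubectl", "--context", ctx, "-n", parts[0],
			"describe", "pod", parts[1]).CombinedOutput()
		fmt.Println(string(desc))
	}
}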

TestNetworkPlugins/group/enable-default-cni/DNS (360.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155354215s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:26:44.028040    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150042856s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:26:58.679245    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.313385    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.318742    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.329060    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.349446    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.389812    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.470252    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.630824    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:26:59.951499    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:27:00.592573    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:27:01.872825    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:27:04.433461    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:27:09.553635    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138022497s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:27:19.794110    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.126685529s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:27:39.269565    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 10:27:40.274290    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146225819s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137770474s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1231 10:28:13.359497    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155558985s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.16944943s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141053334s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1231 10:29:41.556960    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:29:43.155292    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
E1231 10:30:10.310612    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153534863s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1231 10:30:22.105416    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:30:36.756353    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:30:42.315644    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.168305138s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.186191683s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (360.10s)
E1231 10:35:10.310674    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:35:22.104999    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:35:36.756024    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:35:43.945512    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:43.950857    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:43.961160    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:43.981463    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:44.021851    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:44.102161    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:44.262616    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:44.583351    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:45.224031    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:46.505175    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:49.066135    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:35:54.186509    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:36:04.426779    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:36:13.287461    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.450650    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.456027    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.466367    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.486727    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.527148    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.607566    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:13.768085    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:14.088899    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:14.729863    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:16.010342    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:18.571155    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:23.691517    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:24.907897    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:36:33.931976    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:54.412383    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:36:59.313293    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:37:05.868902    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:37:35.373201    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
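
Every nslookup attempt above times out instead of returning an answer or NXDOMAIN, which points at pod-to-kube-dns connectivity (the CNI data path) rather than CoreDNS configuration; the interleaved cert_rotation.go errors appear to be client-go still watching client certificates of profiles that earlier tests already deleted, i.e. noise rather than a cause. A minimal sketch of the probe that net_test.go:163 keeps retrying, assuming it shells out to kubectl as shown above; the deadline and sleep interval here are illustrative, not the harness's actual backoff:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "enable-default-cni-20211231101406-6736"
	// The test gave up after roughly 360s; mirror that with a hard deadline.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "exec",
			"deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		// net_test.go:174 wants the kubernetes service ClusterIP in the reply.
		if err == nil && strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("DNS resolves; cluster networking is healthy")
			return
		}
		// "connection timed out; no servers could be reached" means the pod
		// never reached kube-dns, so check the CNI and the kube-dns service
		// endpoints before suspecting CoreDNS itself.
		time.Sleep(10 * time.Second)
	}
	fmt.Println("kubernetes.default never resolved before the deadline")
}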

TestStartStop/group/embed-certs/serial/FirstStart (302.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1: exit status 80 (4m59.847173839s)

-- stdout --
	* [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1231 10:29:53.376058  219726 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:29:53.376198  219726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:29:53.376208  219726 out.go:310] Setting ErrFile to fd 2...
	I1231 10:29:53.376212  219726 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:29:53.376426  219726 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:29:53.376752  219726 out.go:304] Setting JSON to false
	I1231 10:29:53.378282  219726 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4348,"bootTime":1640942245,"procs":488,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:29:53.378376  219726 start.go:122] virtualization: kvm guest
	I1231 10:29:53.383065  219726 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:29:53.383303  219726 notify.go:174] Checking for updates...
	I1231 10:29:53.386360  219726 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:29:53.389529  219726 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:29:53.392775  219726 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:29:53.396943  219726 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:29:53.400842  219726 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:29:53.402334  219726 config.go:176] Loaded profile config "enable-default-cni-20211231101406-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:29:53.402563  219726 config.go:176] Loaded profile config "no-preload-20211231102928-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:29:53.402745  219726 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:29:53.402813  219726 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:29:53.467450  219726 docker.go:132] docker version: linux-20.10.12
	I1231 10:29:53.467589  219726 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:29:53.599848  219726 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2021-12-31 10:29:53.51136652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:29:53.599963  219726 docker.go:237] overlay module found
	I1231 10:29:53.603460  219726 out.go:176] * Using the docker driver based on user configuration
	I1231 10:29:53.603503  219726 start.go:280] selected driver: docker
	I1231 10:29:53.603511  219726 start.go:795] validating driver "docker" against <nil>
	I1231 10:29:53.603539  219726 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:29:53.603568  219726 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:29:53.603576  219726 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:29:53.603623  219726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:29:53.603650  219726 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:29:53.606759  219726 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:29:53.607628  219726 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:29:53.741338  219726 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:63 SystemTime:2021-12-31 10:29:53.658332742 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:29:53.741508  219726 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:29:53.741739  219726 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:29:53.741778  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:29:53.741791  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:29:53.741811  219726 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:29:53.741822  219726 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:29:53.741831  219726 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 10:29:53.741850  219726 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:29:53.745661  219726 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:29:53.745720  219726 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:29:53.748154  219726 out.go:176] * Pulling base image ...
	I1231 10:29:53.748298  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:29:53.748368  219726 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:29:53.748392  219726 cache.go:57] Caching tarball of preloaded images
	I1231 10:29:53.748460  219726 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:29:53.748799  219726 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:29:53.748829  219726 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:29:53.749003  219726 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:29:53.749040  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json: {Name:mk731142676845b023f187e596c264da3350559e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:29:53.796111  219726 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:29:53.796329  219726 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:29:53.796373  219726 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:29:53.796414  219726 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:29:53.796785  219726 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 347.78µs
	I1231 10:29:53.796832  219726 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:29:53.797021  219726 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:29:53.802174  219726 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1231 10:29:53.802477  219726 start.go:160] libmachine.API.Create for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:29:53.802527  219726 client.go:168] LocalClient.Create starting
	I1231 10:29:53.802626  219726 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:29:53.802661  219726 main.go:130] libmachine: Decoding PEM data...
	I1231 10:29:53.802679  219726 main.go:130] libmachine: Parsing certificate...
	I1231 10:29:53.802744  219726 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:29:53.802775  219726 main.go:130] libmachine: Decoding PEM data...
	I1231 10:29:53.802796  219726 main.go:130] libmachine: Parsing certificate...
	I1231 10:29:53.803411  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:29:53.852706  219726 cli_runner.go:180] docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:29:53.852901  219726 network_create.go:254] running [docker network inspect embed-certs-20211231102953-6736] to gather additional debugging logs...
	I1231 10:29:53.852944  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736
	W1231 10:29:53.899784  219726 cli_runner.go:180] docker network inspect embed-certs-20211231102953-6736 returned with exit code 1
	I1231 10:29:53.899862  219726 network_create.go:257] error running [docker network inspect embed-certs-20211231102953-6736]: docker network inspect embed-certs-20211231102953-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20211231102953-6736
	I1231 10:29:53.899886  219726 network_create.go:259] output of [docker network inspect embed-certs-20211231102953-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20211231102953-6736
	
	** /stderr **
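	The exit-status-1 block above is expected control flow, not a failure: minikube probes for a profile-named Docker network and treats "No such network" as the cue to create one. The same probe can be run by hand with the template trimmed to the subnet field (illustrative):
	
	docker network inspect embed-certs-20211231102953-6736 \
	  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	  || echo "network missing; minikube will create it"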
	I1231 10:29:53.899978  219726 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:29:53.957771  219726 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-689da033f191 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:2a:98:ff}}
	I1231 10:29:53.958723  219726 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00049c208] misses:0}
	I1231 10:29:53.958763  219726 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:29:53.958781  219726 network_create.go:106] attempt to create docker network embed-certs-20211231102953-6736 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1231 10:29:53.958834  219726 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20211231102953-6736
	I1231 10:29:54.068631  219726 network_create.go:90] docker network embed-certs-20211231102953-6736 192.168.58.0/24 created
	I1231 10:29:54.068682  219726 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20211231102953-6736" container
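	Subnet selection above is deterministic: minikube steps through candidate private /24s (192.168.49.0/24 was already held by an existing bridge, so 192.168.58.0/24 is reserved), puts the gateway at .1, and gives the node the first client address, .2. A quick way to confirm what was created (illustrative):
	
	docker network inspect embed-certs-20211231102953-6736 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.58.0/24 gw 192.168.58.1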
	I1231 10:29:54.068742  219726 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:29:54.122134  219726 cli_runner.go:133] Run: docker volume create embed-certs-20211231102953-6736 --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:29:54.167960  219726 oci.go:102] Successfully created a docker volume embed-certs-20211231102953-6736
	I1231 10:29:54.168064  219726 cli_runner.go:133] Run: docker run --rm --name embed-certs-20211231102953-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --entrypoint /usr/bin/test -v embed-certs-20211231102953-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:29:56.838780  219726 cli_runner.go:186] Completed: docker run --rm --name embed-certs-20211231102953-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --entrypoint /usr/bin/test -v embed-certs-20211231102953-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib: (2.670669448s)
	I1231 10:29:56.838813  219726 oci.go:106] Successfully prepared a docker volume embed-certs-20211231102953-6736
	I1231 10:29:56.838860  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:29:56.838891  219726 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:29:56.838956  219726 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211231102953-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I1231 10:30:07.107722  219726 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20211231102953-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (10.268722151s)
	I1231 10:30:07.107761  219726 kic.go:188] duration metric: took 10.268868 seconds to extract preloaded images to volume
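	The two docker run calls above implement a volume-priming pattern: mounting a brand-new named volume at /var makes Docker copy the image's /var contents into it (the throwaway /usr/bin/test entrypoint just gives that container something harmless to do), and a second one-shot container then untars the preload into the same volume. The generic shape of the trick, with placeholder names (illustrative):
	
	docker volume create mydata
	docker run --rm -v mydata:/var --entrypoint /usr/bin/test "$KIC_IMAGE" -d /var/lib
	docker run --rm -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" -v mydata:/extractDir \
	  --entrypoint /usr/bin/tar "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir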
	W1231 10:30:07.107821  219726 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:30:07.107835  219726 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:30:07.107917  219726 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:30:07.215154  219726 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20211231102953-6736 --name embed-certs-20211231102953-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20211231102953-6736 --network embed-certs-20211231102953-6736 --ip 192.168.58.2 --volume embed-certs-20211231102953-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I1231 10:30:07.698249  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Running}}
	I1231 10:30:07.747031  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:07.800842  219726 cli_runner.go:133] Run: docker exec embed-certs-20211231102953-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:30:07.895797  219726 oci.go:175] the created container "embed-certs-20211231102953-6736" has a running status.
	I1231 10:30:07.895839  219726 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa...
	I1231 10:30:08.260906  219726 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:30:08.354826  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:08.396121  219726 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:30:08.396141  219726 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20211231102953-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:30:08.501123  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:08.547863  219726 machine.go:88] provisioning docker machine ...
	I1231 10:30:08.547908  219726 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:30:08.547959  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:08.591246  219726 main.go:130] libmachine: Using SSH client type: native
	I1231 10:30:08.591522  219726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I1231 10:30:08.591549  219726 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:30:08.753114  219726 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:30:08.753238  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:08.809044  219726 main.go:130] libmachine: Using SSH client type: native
	I1231 10:30:08.809232  219726 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49397 <nil> <nil>}
	I1231 10:30:08.809259  219726 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:30:08.957088  219726 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:30:08.957117  219726 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:30:08.957137  219726 ubuntu.go:177] setting up certificates
	I1231 10:30:08.957152  219726 provision.go:83] configureAuth start
	I1231 10:30:08.957216  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.010233  219726 provision.go:138] copyHostCerts
	I1231 10:30:09.010303  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:30:09.010315  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:30:09.010395  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:30:09.010485  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:30:09.010497  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:30:09.010528  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:30:09.010590  219726 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:30:09.010597  219726 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:30:09.010624  219726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:30:09.010676  219726 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
	I1231 10:30:09.283340  219726 provision.go:172] copyRemoteCerts
	I1231 10:30:09.283435  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:30:09.283484  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.337350  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.440676  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:30:09.463186  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:30:09.485013  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:30:09.508442  219726 provision.go:86] duration metric: configureAuth took 551.277077ms
	I1231 10:30:09.508471  219726 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:30:09.508677  219726 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:30:09.508695  219726 machine.go:91] provisioned docker machine in 960.804304ms
	I1231 10:30:09.508703  219726 client.go:171] LocalClient.Create took 15.706168916s
	I1231 10:30:09.508724  219726 start.go:168] duration metric: libmachine.API.Create for "embed-certs-20211231102953-6736" took 15.706248821s
	I1231 10:30:09.508739  219726 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:30:09.508745  219726 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:30:09.508811  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:30:09.508887  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.552519  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.653048  219726 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:30:09.656209  219726 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:30:09.656337  219726 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:30:09.656368  219726 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:30:09.656374  219726 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:30:09.656387  219726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:30:09.656444  219726 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:30:09.656508  219726 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:30:09.656584  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:30:09.664537  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:30:09.684913  219726 start.go:270] post-start completed in 176.156416ms
	I1231 10:30:09.685368  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.730715  219726 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:30:09.730996  219726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:30:09.731047  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.772008  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.874968  219726 start.go:129] duration metric: createHost completed in 16.077915345s
	I1231 10:30:09.875008  219726 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 16.078206836s
	I1231 10:30:09.875092  219726 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:30:09.927414  219726 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:30:09.927420  219726 ssh_runner.go:195] Run: systemctl --version
	I1231 10:30:09.927485  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.927564  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:09.967583  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:09.979664  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:10.068879  219726 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:30:10.093867  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:30:10.106073  219726 docker.go:158] disabling docker service ...
	I1231 10:30:10.106133  219726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:30:10.128894  219726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:30:10.142227  219726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:30:10.239117  219726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:30:10.341506  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:30:10.353172  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:30:10.369434  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
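	The printf payload is simply /etc/containerd/config.toml, base64-encoded so it survives the shell round-trip intact. Decoded, it contains among other things sandbox_image = "k8s.gcr.io/pause:3.6", SystemdCgroup = false (matching the cgroupfs kubelet driver used below), and a CRI cni section with conf_dir = "/etc/cni/net.mk", which is why the kubelet is later started with --cni-conf-dir=/etc/cni/net.mk. One way to eyeball the result on the node (illustrative):
	
	docker exec embed-certs-20211231102953-6736 head -n 4 /etc/containerd/config.toml
	# version = 2
	# root = "/var/lib/containerd"
	# state = "/run/containerd"
	# oom_score = 0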
	I1231 10:30:10.390311  219726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:30:10.409079  219726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:30:10.421339  219726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:30:10.510170  219726 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:30:10.593325  219726 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:30:10.593473  219726 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:30:10.601186  219726 start.go:458] Will wait 60s for crictl version
	I1231 10:30:10.601328  219726 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:30:10.639912  219726 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:30:10Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:30:21.688378  219726 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:30:21.714013  219726 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:30:21.714074  219726 ssh_runner.go:195] Run: containerd --version
	I1231 10:30:21.734634  219726 ssh_runner.go:195] Run: containerd --version
	I1231 10:30:21.762709  219726 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:30:21.762821  219726 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:30:21.799238  219726 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:30:21.803366  219726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
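	The hosts-file edit uses a stage-then-copy pattern instead of a sudo redirect, because in `sudo cmd > /etc/hosts` the redirection would be performed by the unprivileged calling shell; rewriting into /tmp/h.$$ and copying with sudo keeps the privileged step to a single cp. Unrolled (illustrative):
	
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.58.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts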
	I1231 10:30:21.818263  219726 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:30:21.820836  219726 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:30:21.823252  219726 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:30:21.823330  219726 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:30:21.823388  219726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:30:21.849713  219726 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:30:21.849739  219726 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:30:21.849780  219726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:30:21.877125  219726 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:30:21.877150  219726 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:30:21.877194  219726 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:30:21.905205  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:30:21.905228  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:30:21.905240  219726 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:30:21.905251  219726 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:30:21.905384  219726 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
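	The rendered manifest bundles four documents: InitConfiguration (node-local settings: advertise address, CRI socket, kubelet extra args), ClusterConfiguration (cert SANs, control-plane endpoint, pod/service CIDRs), KubeletConfiguration, and KubeProxyConfiguration. A config like this can be sanity-checked without mutating node state (illustrative):
	
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run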
	I1231 10:30:21.905475  219726 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
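	This unit text becomes the systemd drop-in scp'd a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes). The empty ExecStart= line is deliberate: systemd requires clearing the inherited command before a drop-in may redefine it. Applying such a drop-in follows the usual pattern (illustrative):
	
	sudo systemctl daemon-reload
	sudo systemctl restart kubelet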
	I1231 10:30:21.905533  219726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:30:21.913446  219726 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:30:21.913513  219726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:30:21.921381  219726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:30:21.936402  219726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:30:21.952335  219726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1231 10:30:21.968888  219726 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:30:21.972604  219726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:30:21.982979  219726 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:30:21.983113  219726 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:30:21.983168  219726 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:30:21.983238  219726 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:30:21.983257  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt with IP's: []
	I1231 10:30:22.106776  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt ...
	I1231 10:30:22.106805  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.crt: {Name:mk0e03868cc7fe1f3dfcf9da79e287f6262395fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.107019  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key ...
	I1231 10:30:22.107036  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key: {Name:mk515384d2d137e364a1ad1b8d3ee128168482e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.107147  219726 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:30:22.107172  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:30:22.221000  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 ...
	I1231 10:30:22.221038  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041: {Name:mk87e0f8df7efed1ae47a75dc9cd5f3b398499ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.221261  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041 ...
	I1231 10:30:22.221281  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041: {Name:mk2f4f21430846cda040c7943b85a47bdc4f75ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.221397  219726 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt
	I1231 10:30:22.221476  219726 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key
	I1231 10:30:22.221545  219726 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:30:22.221585  219726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt with IP's: []
	I1231 10:30:22.311158  219726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt ...
	I1231 10:30:22.311192  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt: {Name:mkf2db88e9496502bef29f04821dba5ff38cf63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.311420  219726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key ...
	I1231 10:30:22.311448  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key: {Name:mk1243477406232b534e0778e1cb4e2252483e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:22.312115  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:30:22.312173  219726 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:30:22.312184  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:30:22.312215  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:30:22.312273  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:30:22.312307  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:30:22.312364  219726 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:30:22.314205  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:30:22.334696  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:30:22.354429  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:30:22.374158  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:30:22.394468  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:30:22.414684  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:30:22.434624  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:30:22.454604  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:30:22.474147  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:30:22.495690  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:30:22.516417  219726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:30:22.537174  219726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:30:22.551766  219726 ssh_runner.go:195] Run: openssl version
	I1231 10:30:22.557176  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:30:22.566306  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.570734  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.570792  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:30:22.576546  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:30:22.585864  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:30:22.595296  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.598833  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.598899  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:30:22.605189  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:30:22.615268  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:30:22.625714  219726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.629277  219726 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.629338  219726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:30:22.634780  219726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
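	The ls/hash/ln triplets above reproduce what c_rehash does: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 symlink named after the output of openssl x509 -hash. For the cluster CA (hash value taken from the log; illustrative):
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0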
	I1231 10:30:22.643631  219726 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:30:22.643738  219726 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:30:22.643789  219726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:30:22.672067  219726 cri.go:87] found id: ""
	I1231 10:30:22.672126  219726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:30:22.680710  219726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:30:22.689387  219726 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:30:22.689452  219726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:30:22.698676  219726 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:30:22.698754  219726 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:30:38.173908  219726 out.go:203]   - Generating certificates and keys ...
	I1231 10:30:38.178218  219726 out.go:203]   - Booting up control plane ...
	I1231 10:30:38.181459  219726 out.go:203]   - Configuring RBAC rules ...
	I1231 10:30:38.183420  219726 cni.go:93] Creating CNI manager for ""
	I1231 10:30:38.183445  219726 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:30:38.185734  219726 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:30:38.185817  219726 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:30:38.190367  219726 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:30:38.190395  219726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:30:38.205669  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:30:39.153015  219726 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:30:39.153139  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.153139  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_30_39_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.281905  219726 ops.go:34] apiserver oom_adj: -16
	I1231 10:30:39.282099  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:39.845525  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:40.344860  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:40.845513  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:41.345034  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:41.845526  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:42.345451  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:42.845489  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:43.345140  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:43.844830  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:44.344899  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:44.845153  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:45.345227  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:45.845302  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:46.345154  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:46.845391  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:47.345449  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:47.845519  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:48.345688  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:48.845088  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:49.345699  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:49.845346  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:50.345578  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:50.845507  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:51.345522  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:51.845536  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:52.345479  219726 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:30:52.497499  219726 kubeadm.go:864] duration metric: took 13.34441416s to wait for elevateKubeSystemPrivileges.
	I1231 10:30:52.497540  219726 kubeadm.go:390] StartCluster complete in 29.853919823s
	I1231 10:30:52.497564  219726 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:52.497694  219726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:30:52.500090  219726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:30:53.023863  219726 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:30:53.023925  219726 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:30:53.026242  219726 out.go:176] * Verifying Kubernetes components...
	I1231 10:30:53.026320  219726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:30:53.023971  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:30:53.024000  219726 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:30:53.026419  219726 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:30:53.024386  219726 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:30:53.026452  219726 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:30:53.026467  219726 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:30:53.026447  219726 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:30:53.026534  219726 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:30:53.026572  219726 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:30:53.026871  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.027068  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.082105  219726 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:30:53.080041  219726 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	W1231 10:30:53.082139  219726 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:30:53.082173  219726 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:30:53.082262  219726 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:53.082283  219726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:30:53.082340  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:53.082693  219726 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:30:53.120606  219726 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:30:53.123669  219726 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:30:53.134370  219726 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:53.134399  219726 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:30:53.134450  219726 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:30:53.137195  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:53.194741  219726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49397 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:30:53.291532  219726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:30:53.398441  219726 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:30:53.591519  219726 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1231 10:30:53.826074  219726 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:30:53.826115  219726 addons.go:417] enableAddons completed in 802.130055ms
	I1231 10:30:55.133779  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:30:57.632633  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:30:59.633170  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:01.633442  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:04.133379  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:06.133435  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:08.133695  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:10.633248  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:13.132565  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:15.132605  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:17.133262  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:19.632987  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:22.133348  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:24.632671  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:26.633205  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:28.633264  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:30.633699  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:33.133382  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:35.633451  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:38.132846  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:40.633036  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:43.133572  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:45.633545  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:48.132504  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:50.134148  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:52.633270  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:54.633647  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:56.633946  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:31:59.133331  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:01.634066  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:04.133772  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:06.633293  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:08.634238  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:11.133255  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:13.133324  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:15.633602  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:18.132423  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:20.132556  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:22.132709  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:24.133212  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:26.134786  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:28.633546  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:30.634689  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:33.133528  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:35.633841  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:38.132849  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:40.133860  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:42.231855  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:44.633533  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:46.742665  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:49.132689  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:51.134269  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:53.633141  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:55.633730  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:32:58.132509  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:00.133347  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:02.632810  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:05.133374  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:07.134051  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:09.633210  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:12.133191  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:14.633509  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:16.636645  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:19.134286  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:21.137621  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:23.633065  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:25.633423  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:28.132967  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:30.133861  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:32.634046  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:35.136475  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:37.633120  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:40.132857  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:42.633001  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:44.634196  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:47.133032  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:49.133386  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:51.133671  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:53.632927  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:56.133138  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:58.633918  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:00.634303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.134303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:05.633463  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:08.133359  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:10.633648  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:13.133617  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:15.632937  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:18.132486  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:20.132812  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:22.133593  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:24.633597  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.134575  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:29.633090  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:31.633767  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.133525  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:36.632536  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:38.632694  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:40.634440  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.133415  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:45.633500  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:48.131867  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:50.132740  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:52.134471  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:53.136634  219726 node_ready.go:38] duration metric: took 4m0.01292471s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:34:53.140709  219726 out.go:176] 
	W1231 10:34:53.140941  219726 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:34:53.140961  219726 out.go:241] * 
	W1231 10:34:53.141727  219726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:34:53.145349  219726 out.go:176] 

** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:

-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:30:07.68623588Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e190deddeb5b1d7e9b4481ad93139648183971bf041d59445e4f831398786169",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e190deddeb5b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "493d10b1b399122713b7a745a90f22b6329f172b21c0ede79a67fa2664cc1302",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.182919332s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | bridge-20211231101406-6736                                 | bridge-20211231101406-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:23 UTC | Fri, 31 Dec 2021 10:29:24 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| delete  | -p bridge-20211231101406-6736                              | bridge-20211231101406-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:25 UTC | Fri, 31 Dec 2021 10:29:28 UTC |
	| -p      | calico-20211231101408-6736                                 | calico-20211231101408-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:46 UTC | Fri, 31 Dec 2021 10:29:47 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| delete  | -p calico-20211231101408-6736                              | calico-20211231101408-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:48 UTC | Fri, 31 Dec 2021 10:29:53 UTC |
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:28 UTC | Fri, 31 Dec 2021 10:30:43 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                           |         |         |                               |                               |
	|         | --driver=docker                                            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:52 UTC | Fri, 31 Dec 2021 10:30:52 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736       | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:59 UTC | Fri, 31 Dec 2021 10:31:00 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:53 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                           |         |         |                               |                               |
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:32:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                           |         |         |                               |                               |
	|         | --driver=docker                                            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:22 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                           |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                           |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                           |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                           |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                           |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                           |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                           |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                           |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                           |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                           |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                           |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                           |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                           |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                           |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
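The "Last Start" log below corresponds to the final start row in the audit table. For local reproduction, that invocation can be replayed as a single command (a sketch; every flag is copied verbatim from the rows above):

	out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 --memory=2200 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true --network-plugin=cni \
	  --extra-config=kubelet.network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.23.2-rc.0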
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:33:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
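Given that klog layout, severity can be filtered mechanically when triaging (a sketch, assuming the log has been saved locally as lastStart.txt):

	# keep only warning/error entries (severity letter W or E followed by mmdd)
	grep -E '^\s*[WE][0-9]{4} ' lastStart.txt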
	I1231 10:33:58.889763  239842 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:33:58.889968  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890015  239842 out.go:310] Setting ErrFile to fd 2...
	I1231 10:33:58.890028  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890301  239842 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:33:58.890755  239842 out.go:304] Setting JSON to false
	I1231 10:33:58.892928  239842 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4593,"bootTime":1640942245,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:33:58.893046  239842 start.go:122] virtualization: kvm guest
	I1231 10:33:58.896075  239842 out.go:176] * [newest-cni-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:33:58.898770  239842 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:33:58.896425  239842 notify.go:174] Checking for updates...
	I1231 10:33:58.901377  239842 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:33:58.904292  239842 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:33:58.906743  239842 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:33:58.909823  239842 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:33:58.911269  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:33:58.911745  239842 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:33:58.960055  239842 docker.go:132] docker version: linux-20.10.12
	I1231 10:33:58.960175  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.061340  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:58.994194285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:33:59.061470  239842 docker.go:237] overlay module found
	I1231 10:33:59.064676  239842 out.go:176] * Using the docker driver based on existing profile
	I1231 10:33:59.064715  239842 start.go:280] selected driver: docker
	I1231 10:33:59.064721  239842 start.go:795] validating driver "docker" against &{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.064864  239842 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:33:59.064877  239842 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:33:59.064882  239842 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:33:59.064913  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.064992  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.067375  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.068079  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.179516  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:59.103117577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:33:59.179717  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.179756  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.182917  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
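The repeated cgroup warning can be verified on the host before assuming a minikube problem (a sketch; both checks use standard tooling, and the second reads the same MemoryLimit field visible in the docker info dumps above):

	# cgroup v1: a 1 in the "enabled" column means the memory controller is usable
	grep memory /proc/cgroups
	# docker's view of the same capability
	docker info --format '{{.MemoryLimit}}'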
	I1231 10:33:59.183064  239842 start_flags.go:829] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1231 10:33:59.183104  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:33:59.183116  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:59.183124  239842 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183133  239842 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183142  239842 start_flags.go:298] config:
	{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.186972  239842 out.go:176] * Starting control plane node newest-cni-20211231103230-6736 in cluster newest-cni-20211231103230-6736
	I1231 10:33:59.187046  239842 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:33:59.189137  239842 out.go:176] * Pulling base image ...
	I1231 10:33:59.189233  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:33:59.189311  239842 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:33:59.189340  239842 cache.go:57] Caching tarball of preloaded images
	I1231 10:33:59.189397  239842 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:33:59.189738  239842 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:33:59.189762  239842 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2-rc.0 on containerd
	I1231 10:33:59.189945  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.232549  239842 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:33:59.232610  239842 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:33:59.232633  239842 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:33:59.232677  239842 start.go:313] acquiring machines lock for newest-cni-20211231103230-6736: {Name:mkea4a41968f23a7f754ed1625a06fab4a3434ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:33:59.232826  239842 start.go:317] acquired machines lock for "newest-cni-20211231103230-6736" in 116.689µs
	I1231 10:33:59.232869  239842 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:33:59.232883  239842 fix.go:55] fixHost starting: 
	I1231 10:33:59.233271  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.275589  239842 fix.go:108] recreateIfNeeded on newest-cni-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:33:59.275624  239842 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:33:57.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:00.110928  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:33:58.633918  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:00.634303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.134303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:59.278797  239842 out.go:176] * Restarting existing docker container for "newest-cni-20211231103230-6736" ...
	I1231 10:33:59.278893  239842 cli_runner.go:133] Run: docker start newest-cni-20211231103230-6736
	I1231 10:33:59.808917  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.856009  239842 kic.go:420] container "newest-cni-20211231103230-6736" state is running.
	I1231 10:33:59.856565  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:33:59.904405  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.904680  239842 machine.go:88] provisioning docker machine ...
	I1231 10:33:59.904703  239842 ubuntu.go:169] provisioning hostname "newest-cni-20211231103230-6736"
	I1231 10:33:59.904740  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:33:59.947914  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:33:59.948105  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:33:59.948124  239842 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20211231103230-6736 && echo "newest-cni-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:33:59.949036  239842 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47278->127.0.0.1:49417: read: connection reset by peer
	I1231 10:34:03.099652  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20211231103230-6736
	
	I1231 10:34:03.099755  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.144059  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:34:03.144255  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:34:03.144303  239842 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:34:03.284958  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:34:03.284998  239842 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:34:03.285062  239842 ubuntu.go:177] setting up certificates
	I1231 10:34:03.285076  239842 provision.go:83] configureAuth start
	I1231 10:34:03.285144  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:03.331319  239842 provision.go:138] copyHostCerts
	I1231 10:34:03.331385  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:34:03.331393  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:34:03.331460  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:34:03.331544  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:34:03.331558  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:34:03.331579  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:34:03.331625  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:34:03.331638  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:34:03.331657  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:34:03.331695  239842 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20211231103230-6736 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20211231103230-6736]
	I1231 10:34:03.586959  239842 provision.go:172] copyRemoteCerts
	I1231 10:34:03.587049  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:34:03.587091  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.625102  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.720644  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:34:03.740753  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I1231 10:34:03.760615  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:34:03.780213  239842 provision.go:86] duration metric: configureAuth took 495.114028ms
	I1231 10:34:03.780262  239842 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:34:03.780481  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:03.780495  239842 machine.go:91] provisioned docker machine in 3.875801286s
	I1231 10:34:03.780501  239842 start.go:267] post-start starting for "newest-cni-20211231103230-6736" (driver="docker")
	I1231 10:34:03.780506  239842 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:34:03.780545  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:34:03.780586  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:02.111051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:04.112682  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:05.633463  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:08.133359  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.820417  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.925875  239842 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:34:03.930813  239842 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:34:03.930851  239842 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:34:03.930864  239842 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:34:03.930872  239842 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:34:03.930885  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:34:03.930949  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:34:03.931028  239842 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:34:03.931126  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:34:03.940439  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:03.962877  239842 start.go:270] post-start completed in 182.361292ms
	I1231 10:34:03.962946  239842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:34:03.962978  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.003202  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.101581  239842 fix.go:57] fixHost completed within 4.868692803s
	I1231 10:34:04.101619  239842 start.go:80] releasing machines lock for "newest-cni-20211231103230-6736", held for 4.868770707s
	I1231 10:34:04.101715  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:04.141381  239842 ssh_runner.go:195] Run: systemctl --version
	I1231 10:34:04.141446  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.141386  239842 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:34:04.141549  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.184922  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.186345  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.300405  239842 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:34:04.314857  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:34:04.326678  239842 docker.go:158] disabling docker service ...
	I1231 10:34:04.326732  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:34:04.338952  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:34:04.350188  239842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:34:04.441879  239842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:34:04.523149  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:34:04.533774  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:34:04.548558  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
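The long argument above is base64-encoded TOML (the opening bytes decode to version = 2). It can be decoded to review the containerd settings being installed, or read back from the node after the restart (a sketch):

	# decode a copy of the payload saved to config.toml.b64
	base64 -d config.toml.b64 | less
	# or inspect the file the command just wrote, from inside the node
	minikube ssh -p newest-cni-20211231103230-6736 -- sudo cat /etc/containerd/config.toml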
	I1231 10:34:04.563078  239842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:34:04.570184  239842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:34:04.577904  239842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:34:04.661512  239842 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:34:04.743598  239842 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:34:04.743665  239842 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:34:04.748322  239842 start.go:458] Will wait 60s for crictl version
	I1231 10:34:04.748376  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:04.776556  239842 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:34:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:34:06.611941  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:09.111247  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:10.633648  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:13.133617  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:11.111440  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:13.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.111758  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.824426  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:15.852646  239842 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:34:15.852709  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.875454  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.899983  239842 out.go:176] * Preparing Kubernetes v1.23.2-rc.0 on containerd 1.4.12 ...
	I1231 10:34:15.900089  239842 cli_runner.go:133] Run: docker network inspect newest-cni-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:34:15.938827  239842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1231 10:34:15.942919  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:15.956827  239842 out.go:176]   - kubelet.network-plugin=cni
	I1231 10:34:15.959219  239842 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1231 10:34:15.961312  239842 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:34:15.963727  239842 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:34:15.632937  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:18.132486  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:15.966056  239842 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:34:15.966172  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:34:15.966243  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:15.995265  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:15.995290  239842 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:34:15.995331  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:16.022435  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:16.022458  239842 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:34:16.022504  239842 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:34:16.051296  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:16.051321  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:16.051335  239842 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1231 10:34:16.051348  239842 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20211231103230-6736 NodeName:newest-cni-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:34:16.051545  239842 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:34:16.051662  239842 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --global-housekeeping-interval=60m --hostname-override=newest-cni-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
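Once the drop-in below is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the flags actually in effect can be compared against this rendering with systemd's own tooling (a sketch, run inside the node):

	# show the kubelet unit plus all drop-ins
	systemctl cat kubelet
	# print just the effective ExecStart
	systemctl show kubelet -p ExecStart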
	I1231 10:34:16.051732  239842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:34:16.059993  239842 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:34:16.060063  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:34:16.068090  239842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (679 bytes)
	I1231 10:34:16.083548  239842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1231 10:34:16.098901  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1231 10:34:16.116150  239842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:34:16.119950  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:16.130712  239842 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736 for IP: 192.168.76.2
	I1231 10:34:16.130826  239842 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:34:16.130889  239842 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:34:16.130980  239842 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/client.key
	I1231 10:34:16.131059  239842 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key.31bdca25
	I1231 10:34:16.131233  239842 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key
	I1231 10:34:16.131373  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:34:16.131415  239842 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:34:16.131431  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:34:16.131463  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:34:16.131498  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:34:16.131533  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:34:16.131586  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:16.132861  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:34:16.154694  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:34:16.176000  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:34:16.199525  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:34:16.222324  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:34:16.245022  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:34:16.269767  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:34:16.296529  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:34:16.320759  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:34:16.344786  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:34:16.368318  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:34:16.389773  239842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:34:16.405384  239842 ssh_runner.go:195] Run: openssl version
	I1231 10:34:16.411011  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:34:16.419791  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423595  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423671  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.429324  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:34:16.437405  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:34:16.446166  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.449927  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.450001  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.455858  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:34:16.464618  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:34:16.474030  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478705  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478802  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.485176  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
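
	The test-and-link sequences above follow the stock OpenSSL trust-directory convention: at verification time OpenSSL looks a CA up by the hash of its subject name, so each certificate in /etc/ssl/certs needs a companion <hash>.0 symlink (51391683.0, 3ec20f2e.0 and b5213941.0 here). A sketch of installing one CA this way by hand; `openssl rehash` (or the older c_rehash) automates the same thing for a whole directory:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem    # expose the cert in the trust dir
	    HASH=$(openssl x509 -hash -noout -in "$CERT")        # subject-name hash, b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
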
	I1231 10:34:16.493913  239842 kubeadm.go:388] StartCluster: {Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:34:16.494024  239842 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:34:16.494078  239842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:34:16.524647  239842 cri.go:87] found id: "155c2bf79c8bd8b8bb6dbeb56755a02c58b466899f6a9e677748a3b0d10686ed"
	I1231 10:34:16.524684  239842 cri.go:87] found id: "8d8c36be3cd9499af063e0e758f58669f30330492f2849aca5442c85468a63bd"
	I1231 10:34:16.524693  239842 cri.go:87] found id: "24d8438942a01eb4000995dcc71ad4b52b67206a5f4af1954e644740df495c62"
	I1231 10:34:16.524705  239842 cri.go:87] found id: "3c9717f9388efe5a2acdad99a248f83aeb684c526c2e02b91a89fd56616cb240"
	I1231 10:34:16.524711  239842 cri.go:87] found id: "f3c21531b1b800501ebef6dcb786a58ec6fe912c6da1b160f1d0589524631a5f"
	I1231 10:34:16.524717  239842 cri.go:87] found id: "f1cfa511a7febb69b44502241a766bc5f1da2755e54f022d08281b3b3c4551ee"
	I1231 10:34:16.524722  239842 cri.go:87] found id: ""
	I1231 10:34:16.524779  239842 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:34:16.541313  239842 cri.go:114] JSON = null
	W1231 10:34:16.541371  239842 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
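
	The warning above means crictl saw six kube-system containers while `runc list` under /run/containerd/runc/k8s.io returned nothing (JSON = null). One plausible explanation, consistent with the containerd log later in this report, is that the tasks live under the runtime-v2 shim state directory rather than the root runc was asked about. Comparing the two views by hand would look like this:

	    # what the CRI layer reports for kube-system
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	    # what runc sees under the root minikube queried (empty in this run)
	    sudo runc --root /run/containerd/runc/k8s.io list -f json
	    # per-container state dirs for the v2 shim runtime, if that is where they actually live
	    sudo ls /run/containerd/io.containerd.runtime.v2.task/k8s.io
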
	I1231 10:34:16.541451  239842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:34:16.549111  239842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:34:16.556666  239842 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.557897  239842 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:16.558554  239842 kubeconfig.go:127] "newest-cni-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:34:16.559659  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:16.562421  239842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:34:16.570276  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.570337  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.585156  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.785503  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.785582  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.803739  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.986016  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.986082  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.003346  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.185436  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.201434  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.385753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.385834  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.401272  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.585464  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.585557  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.602009  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.786319  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.786390  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.801129  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.985405  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.985492  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.002220  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.185431  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.200513  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.385798  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.385886  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.404508  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.585753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.585841  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.601638  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.785922  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.786018  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:17.611312  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:20.111421  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	W1231 10:34:18.803740  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.985967  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.986067  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.002808  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.186110  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.186208  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.202949  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.386242  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.386333  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.403965  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.586311  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.586409  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.602884  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.602907  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.602960  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.619190  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:34:19.619231  239842 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:34:19.619258  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:34:20.346752  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:20.358199  239842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:34:20.367408  239842 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:34:20.367462  239842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:34:20.377071  239842 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:34:20.377137  239842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:34:20.729691  239842 out.go:203]   - Generating certificates and keys ...
	I1231 10:34:20.132812  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:22.133593  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:21.380175  239842 out.go:203]   - Booting up control plane ...
	I1231 10:34:22.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.613409  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.633597  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.134575  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.111697  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.610920  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.633090  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:31.633767  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.434058  239842 out.go:203]   - Configuring RBAC rules ...
	I1231 10:34:34.849120  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:34.849163  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:31.611280  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.111498  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.133525  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:36.632536  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.852842  239842 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:34:34.852967  239842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:34:34.857479  239842 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl ...
	I1231 10:34:34.857506  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:34:34.874366  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:34:35.630846  239842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:34:35.630937  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.630961  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=newest-cni-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_34_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.651672  239842 ops.go:34] apiserver oom_adj: -16
	I1231 10:34:35.719909  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.316492  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.815787  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.316079  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.816415  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:38.316091  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.611394  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.632694  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:40.634440  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.133415  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:38.815995  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.316530  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.816139  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.315766  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.815697  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.316307  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.316374  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.816315  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:43.316484  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.611615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:42.612982  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.111363  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.633500  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:48.131867  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.816729  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.316037  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.815895  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.317082  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.315798  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.815886  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.316269  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.815962  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:48.003966  239842 kubeadm.go:864] duration metric: took 12.373088498s to wait for elevateKubeSystemPrivileges.
	I1231 10:34:48.003999  239842 kubeadm.go:390] StartCluster complete in 31.510097342s
	I1231 10:34:48.004022  239842 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.004121  239842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:48.005914  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.526056  239842 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20211231103230-6736" rescaled to 1
	I1231 10:34:48.526130  239842 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}
	I1231 10:34:48.526152  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:34:48.528894  239842 out.go:176] * Verifying Kubernetes components...
	I1231 10:34:48.528963  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:48.526213  239842 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1231 10:34:48.529044  239842 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529069  239842 addons.go:65] Setting dashboard=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529085  239842 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529105  239842 addons.go:65] Setting metrics-server=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529112  239842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20211231103230-6736"
	I1231 10:34:48.529125  239842 addons.go:153] Setting addon metrics-server=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529134  239842 addons.go:165] addon metrics-server should already be in state true
	I1231 10:34:48.529072  239842 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20211231103230-6736"
	I1231 10:34:48.529172  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	W1231 10:34:48.529178  239842 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:34:48.529204  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529086  239842 addons.go:153] Setting addon dashboard=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529469  239842 addons.go:165] addon dashboard should already be in state true
	I1231 10:34:48.529489  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529507  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529669  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529673  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.526424  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:48.530032  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.586829  239842 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:34:48.590085  239842 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.589138  239842 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.590194  239842 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:34:48.590280  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.590345  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:34:48.590360  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:34:48.590419  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.593339  239842 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.593453  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:34:48.593465  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:34:48.593553  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.591124  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.600479  239842 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:34:48.601216  239842 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.601358  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:34:48.601494  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.659910  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.665029  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
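
	This pipeline is how the host.minikube.internal record (confirmed a few lines below) gets into CoreDNS: read the coredns ConfigMap, use sed to insert a hosts{} block into the Corefile just before the forward directive, and write the result back with `kubectl replace`. The same command, reflowed for readability with the long binary and kubeconfig paths shortened to plain kubectl:

	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl -n kube-system replace -f -
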
	I1231 10:34:48.665195  239842 api_server.go:51] waiting for apiserver process to appear ...
	I1231 10:34:48.665258  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:48.667125  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.679789  239842 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:48.679853  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:34:48.679963  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.686805  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.739096  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:47.111713  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:49.112487  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:48.902948  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:34:48.902984  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:34:48.903155  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.903256  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:34:48.903281  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:34:49.003786  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:34:49.003825  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:34:49.006836  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:34:49.006870  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:34:49.007822  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:49.095461  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.095567  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:34:49.102715  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:34:49.102747  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:34:49.206111  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:34:49.206155  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:34:49.207830  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.384988  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:34:49.385020  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:34:49.392682  239842 api_server.go:71] duration metric: took 866.519409ms to wait for apiserver process to appear ...
	I1231 10:34:49.392800  239842 api_server.go:87] waiting for apiserver healthz status ...
	I1231 10:34:49.392828  239842 start.go:773] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1231 10:34:49.392831  239842 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1231 10:34:49.403071  239842 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1231 10:34:49.404217  239842 api_server.go:140] control plane version: v1.23.2-rc.0
	I1231 10:34:49.404287  239842 api_server.go:130] duration metric: took 11.464686ms to wait for apiserver health ...
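
	The health probe above is just an HTTPS GET that must return 200 with body "ok"; /healthz is normally readable anonymously (the system:public-info-viewer role is bound to unauthenticated users), so it can be reproduced by hand against the endpoint logged above:

	    curl -k https://192.168.76.2:8443/healthz     # expected output: ok
	    # to verify the server certificate instead of using -k, pass the profile's CA:
	    #   curl --cacert "$MINIKUBE_HOME/ca.crt" https://192.168.76.2:8443/healthz
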
	I1231 10:34:49.404308  239842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1231 10:34:49.488542  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:34:49.488611  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:34:49.497619  239842 system_pods.go:59] 7 kube-system pods found
	I1231 10:34:49.497777  239842 system_pods.go:61] "coredns-64897985d-fh6sl" [f7a107d1-df7c-4b28-8f0d-eb5e6da38e4f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I1231 10:34:49.497853  239842 system_pods.go:61] "etcd-newest-cni-20211231103230-6736" [9212b001-2098-4378-b03f-05510269335f] Running
	I1231 10:34:49.497879  239842 system_pods.go:61] "kindnet-tkvfw" [4abcfbc0-b4e3-41f0-89d4-26c9a356f41e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1231 10:34:49.497904  239842 system_pods.go:61] "kube-apiserver-newest-cni-20211231103230-6736" [a37117a3-3753-4e5b-b2f3-5d129612ee51] Running
	I1231 10:34:49.497934  239842 system_pods.go:61] "kube-controller-manager-newest-cni-20211231103230-6736" [bae63363-f3ff-4de8-9600-07baeb9a1915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1231 10:34:49.497959  239842 system_pods.go:61] "kube-proxy-228gt" [8d98d417-a803-474a-b07d-aa7c25391bd9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1231 10:34:49.497987  239842 system_pods.go:61] "kube-scheduler-newest-cni-20211231103230-6736" [1a0e5162-834e-4b7f-815a-66f6b1511153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1231 10:34:49.498011  239842 system_pods.go:74] duration metric: took 93.696354ms to wait for pod list to return data ...
	I1231 10:34:49.498038  239842 default_sa.go:34] waiting for default service account to be created ...
	I1231 10:34:49.501869  239842 default_sa.go:45] found service account: "default"
	I1231 10:34:49.501947  239842 default_sa.go:55] duration metric: took 3.889685ms for default service account to be created ...
	I1231 10:34:49.501972  239842 kubeadm.go:542] duration metric: took 975.813843ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1231 10:34:49.502004  239842 node_conditions.go:102] verifying NodePressure condition ...
	I1231 10:34:49.584377  239842 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I1231 10:34:49.584456  239842 node_conditions.go:123] node cpu capacity is 8
	I1231 10:34:49.584501  239842 node_conditions.go:105] duration metric: took 82.480868ms to run NodePressure ...
	I1231 10:34:49.584516  239842 start.go:211] waiting for startup goroutines ...
	I1231 10:34:49.590387  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:34:49.590472  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:34:49.687079  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:34:49.687111  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:34:49.786712  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:49.786813  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:34:49.897022  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:50.206491  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.303301914s)
	I1231 10:34:50.206645  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.198797907s)
	I1231 10:34:50.596403  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388529563s)
	I1231 10:34:50.596444  239842 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20211231103230-6736"
	I1231 10:34:51.611369  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.714292146s)
	I1231 10:34:51.613756  239842 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:34:51.613803  239842 addons.go:417] enableAddons completed in 3.087611947s
	I1231 10:34:51.651587  239842 start.go:493] kubectl: 1.23.1, cluster: 1.23.2-rc.0 (minor skew: 0)
	I1231 10:34:51.654312  239842 out.go:176] * Done! kubectl is now configured to use "newest-cni-20211231103230-6736" cluster and "default" namespace by default
	I1231 10:34:50.132740  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:52.134471  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:53.136634  219726 node_ready.go:38] duration metric: took 4m0.01292471s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:34:53.140709  219726 out.go:176] 
	W1231 10:34:53.140941  219726 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:34:53.140961  219726 out.go:241] * 
	W1231 10:34:53.141727  219726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	7ea8e6db9c1e8       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   44dd51a088ce3
	a0dcfaded2798       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   44dd51a088ce3
	a380c0d98153c       b46c42588d511       4 minutes ago        Running             kube-proxy                0                   835b47ba02211
	7de9215e7da17       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   e2c8947bfc291
	bd3c847642a9f       f51846a4fd288       4 minutes ago        Running             kube-controller-manager   0                   ea2c5e2434c34
	4d064efc3679b       71d575efe6283       4 minutes ago        Running             kube-scheduler            0                   c1141691949f5
	eb97b3087125d       b6d7abedde399       4 minutes ago        Running             kube-apiserver            0                   0d2541a270208
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:34:54 UTC. --
	Dec 31 10:30:31 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:31.716559658Z" level=info msg="StartContainer for \"4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3\" returns successfully"
	Dec 31 10:30:51 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:51.230765345Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 31 10:30:51 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:51.979086400Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-2gpsc,Uid:1c5247f0-9b6e-4b7c-9325-0f80e9697124,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:30:51 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:51.979967466Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-jfhh7,Uid:1b899ed8-2aac-4758-9989-38222ff6eb2f,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.024300490Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a pid=1694
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.026865671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/835b47ba022111f96a6650d241f774d3e14b6a38031a643ec71e14ead7708fdd pid=1706
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.190116208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfhh7,Uid:1b899ed8-2aac-4758-9989-38222ff6eb2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"835b47ba022111f96a6650d241f774d3e14b6a38031a643ec71e14ead7708fdd\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.193405692Z" level=info msg="CreateContainer within sandbox \"835b47ba022111f96a6650d241f774d3e14b6a38031a643ec71e14ead7708fdd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.238282936Z" level=info msg="CreateContainer within sandbox \"835b47ba022111f96a6650d241f774d3e14b6a38031a643ec71e14ead7708fdd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.239785373Z" level=info msg="StartContainer for \"a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.336434387Z" level=info msg="StartContainer for \"a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d\" returns successfully"
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.488800361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2gpsc,Uid:1c5247f0-9b6e-4b7c-9325-0f80e9697124,Namespace:kube-system,Attempt:0,} returns sandbox id \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.495281937Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.525057774Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.525738438Z" level=info msg="StartContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:30:52 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:30:52.984739617Z" level=info msg="StartContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\" returns successfully"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.310072479Z" level=info msg="Finish piping stdout of container \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.310150362Z" level=info msg="Finish piping stderr of container \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.311526990Z" level=info msg="TaskExit event &TaskExit{ContainerID:a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a,ID:a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a,Pid:1876,ExitStatus:2,ExitedAt:2021-12-31 10:33:33.311123388 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.351461348Z" level=info msg="shim disconnected" id=a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.352550035Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.706926270Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.732850781Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\""
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.733440645Z" level=info msg="StartContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\""
	Dec 31 10:33:33 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:33:33.985110019Z" level=info msg="StartContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_30_39_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:30:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:34:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:30:51 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:30:51 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:30:51 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:30:51 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m11s
	  kube-system                 kindnet-2gpsc                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-jfhh7                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m2s                   kube-proxy  
	  Normal  Starting                 4m24s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m24s (x3 over 4m24s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x3 over 4m24s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x2 over 4m24s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f] <==
	* {"level":"info","ts":"2021-12-31T10:30:32.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:30:32.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:30:32.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:30:32.208Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:30:32.211Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2021-12-31T10:32:36.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.318119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:509 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130914 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:36.853Z","caller":"traceutil/trace.go:171","msg":"trace[1858985268] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"156.695597ms","start":"2021-12-31T10:32:36.696Z","end":"2021-12-31T10:32:36.853Z","steps":["trace[1858985268] 'process raft request'  (duration: 53.739286ms)","trace[1858985268] 'compare'  (duration: 102.215258ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:37.266Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.925799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:37.266Z","caller":"traceutil/trace.go:171","msg":"trace[619865893] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:511; }","duration":"134.035118ms","start":"2021-12-31T10:32:37.132Z","end":"2021-12-31T10:32:37.266Z","steps":["trace[619865893] 'range keys from in-memory index tree'  (duration: 133.800618ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:46.740Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.769669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:46.740Z","caller":"traceutil/trace.go:171","msg":"trace[1529683504] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:512; }","duration":"108.867526ms","start":"2021-12-31T10:32:46.632Z","end":"2021-12-31T10:32:46.740Z","steps":["trace[1529683504] 'range keys from in-memory index tree'  (duration: 108.625105ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.307928ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:47.049Z","caller":"traceutil/trace.go:171","msg":"trace[1806958756] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"302.801998ms","start":"2021-12-31T10:32:46.746Z","end":"2021-12-31T10:32:47.049Z","steps":["trace[1806958756] 'process raft request'  (duration: 151.330883ms)","trace[1806958756] 'compare'  (duration: 151.186541ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-31T10:32:46.746Z","time spent":"302.874046ms","remote":"127.0.0.1:48400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >"}
	{"level":"warn","ts":"2021-12-31T10:32:47.395Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.090228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-12-31T10:32:47.395Z","caller":"traceutil/trace.go:171","msg":"trace[768014922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:513; }","duration":"217.205911ms","start":"2021-12-31T10:32:47.178Z","end":"2021-12-31T10:32:47.395Z","steps":["trace[768014922] 'range keys from in-memory index tree'  (duration: 216.99865ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  10:34:54 up  1:17,  0 users,  load average: 2.41, 2.31, 2.64
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072] <==
	* I1231 10:30:35.080871       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:30:35.086357       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1231 10:30:35.089831       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:30:35.105027       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:30:35.185467       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1231 10:30:35.190675       1 controller.go:611] quota admission added evaluator for: namespaces
	I1231 10:30:35.962970       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:30:35.963003       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:30:35.970498       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:30:35.974237       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:30:35.974274       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:30:36.585265       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:30:36.625041       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:30:36.698601       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:30:36.708729       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1231 10:30:36.709846       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:30:36.714558       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:30:37.200677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:30:38.041816       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:30:38.051264       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:30:38.081752       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:30:43.185304       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:30:51.624551       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:30:51.823628       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:30:52.493305       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3] <==
	* I1231 10:30:51.142552       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1231 10:30:51.143335       1 event.go:294] "Event occurred" object="embed-certs-20211231102953-6736" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20211231102953-6736 event: Registered Node embed-certs-20211231102953-6736 in Controller"
	I1231 10:30:51.147804       1 range_allocator.go:374] Set node embed-certs-20211231102953-6736 PodCIDR to [10.244.0.0/24]
	I1231 10:30:51.153429       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.153457       1 event.go:294] "Event occurred" object="kube-system/etcd-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.154844       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.158578       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.179056       1 shared_informer.go:247] Caches are synced for service account 
	I1231 10:30:51.199625       1 shared_informer.go:247] Caches are synced for attach detach 
	I1231 10:30:51.218729       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:30:51.218771       1 disruption.go:371] Sending events to api server.
	I1231 10:30:51.219917       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1231 10:30:51.271457       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:30:51.324558       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.331860       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.635933       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jfhh7"
	I1231 10:30:51.641599       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2gpsc"
	I1231 10:30:51.747935       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792036       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792069       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:30:51.830856       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:30:52.031176       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8fwps"
	I1231 10:30:52.080996       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-65b6p"
	I1231 10:30:52.526990       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:30:52.594186       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-8fwps"
	
	* 
	* ==> kube-proxy [a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d] <==
	* I1231 10:30:52.397068       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:30:52.397149       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:30:52.397183       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:30:52.487490       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:30:52.487554       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:30:52.487568       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:30:52.487602       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:30:52.488071       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:30:52.489621       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:30:52.489639       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:30:52.489749       1 config.go:317] "Starting service config controller"
	I1231 10:30:52.489756       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:30:52.590564       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:30:52.590627       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3] <==
	* E1231 10:30:35.193161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:30:35.193184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:35.193230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.193236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:30:35.196995       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:30:35.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.197155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.999536       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.999579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.135168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:30:36.135211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:30:36.140141       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:30:36.140211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:30:36.180265       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:30:36.180311       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:30:36.180317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.180351       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.186253       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:30:36.186304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:30:36.323948       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.323983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.323990       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:36.324013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:30:36.802803       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1231 10:30:37.896209       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:34:54 UTC. --
	Dec 31 10:32:58 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:32:58.455359    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:03 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:03.456973    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:08 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:08.458425    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:13 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:13.459164    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:18 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:18.460948    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:23 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:23.461630    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:28 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:28.462484    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:33.463737    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:33 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:33:33.704568    1285 scope.go:110] "RemoveContainer" containerID="a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a"
	Dec 31 10:33:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:38.465238    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:43.466737    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:48.467837    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:53.469144    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:33:58 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:33:58.470579    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:03 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:03.472349    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:08 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:08.474093    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:13 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:13.475280    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:18 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:18.476654    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:23 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:23.478091    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:28 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:28.479017    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:33.480218    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:38.481642    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:43.483129    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:48.484189    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:34:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:34:53.485096    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-65b6p storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-65b6p storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-65b6p storage-provisioner: exit status 1 (65.678102ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-65b6p" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-65b6p storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (302.57s)
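Note: the FirstStart failure above is consistent with the kubelet loop of "cni plugin not initialized" errors in the logs: kindnet-cni exited once (TaskExit status 2), the node never reported Ready, and coredns/storage-provisioner were left unschedulable. A possible manual follow-up against this profile, assuming it still exists (these commands are illustrative and were not part of the test run):

	out/minikube-linux-amd64 ssh -p embed-certs-20211231102953-6736 "ls -l /etc/cni/net.d"
	out/minikube-linux-amd64 ssh -p embed-certs-20211231102953-6736 "sudo crictl ps -a"
	kubectl --context embed-certs-20211231102953-6736 get nodes -o wide

With kindnet, a conflist (typically 10-kindnet.conflist) would normally appear under /etc/cni/net.d once the plugin initializes; its absence would match the NetworkPluginNotReady condition shown in the node status above.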

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (485.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e2fc75e7-b1b7-4f20-81a5-67dbfbb8f086] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
E1231 10:31:04.440988    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
start_stop_delete_test.go:181: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2021-12-31 10:39:01.802736162 +0000 UTC m=+3440.784552172
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context old-k8s-version-20211231102602-6736 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ddtj (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-9ddtj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9ddtj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  8m                     default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  5m24s (x1 over 6m54s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context old-k8s-version-20211231102602-6736 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
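Note: the describe output above points at the cause of the timeout: the pod tolerates only NoExecute taints, while the lone node evidently still carried node.kubernetes.io/not-ready:NoSchedule (the same not-Ready/CNI symptom as in the embed-certs failure), so the scheduler could never place busybox. One way to confirm the taint by hand (illustrative, not part of the test run):

	kubectl --context old-k8s-version-20211231102602-6736 get nodes -o jsonpath='{.items[0].spec.taints}'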
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:26:13.982747787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8fb65b850e95d9291586c192e53e52d1c3afd0fdfabe6699d1eda53b3ac8da7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49387"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49383"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49384"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8fb65b850e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "a8e57908631bc9f44ae1933829bac6c4d6b691fb5425dc0680fa172c99243c35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
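
For reference, the JSON dump above is a full "docker container inspect" of the old-k8s-version profile container, captured for the post-mortem. A minimal reproduction sketch, assuming the container still exists on the test host; the Go template is the same one minikube itself runs later in this log to locate the forwarded SSH port:

    # full inspect dump, as captured above
    docker container inspect old-k8s-version-20211231102602-6736
    # just the host port mapped to the container's SSH port 22
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20211231102602-6736
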
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.320839379s)
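
The capture that follows, including the Audit table and the Last Start log, is the output of that command; a minimal sketch for re-running it by hand against the same profile, assuming the binary under test is still at out/:

    out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
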
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:52 UTC | Fri, 31 Dec 2021 10:30:52 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:59 UTC | Fri, 31 Dec 2021 10:31:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:53 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:32:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:22 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:33:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:33:58.889763  239842 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:33:58.889968  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890015  239842 out.go:310] Setting ErrFile to fd 2...
	I1231 10:33:58.890028  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890301  239842 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:33:58.890755  239842 out.go:304] Setting JSON to false
	I1231 10:33:58.892928  239842 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4593,"bootTime":1640942245,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:33:58.893046  239842 start.go:122] virtualization: kvm guest
	I1231 10:33:58.896075  239842 out.go:176] * [newest-cni-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:33:58.898770  239842 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:33:58.896425  239842 notify.go:174] Checking for updates...
	I1231 10:33:58.901377  239842 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:33:58.904292  239842 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:33:58.906743  239842 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:33:58.909823  239842 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:33:58.911269  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:33:58.911745  239842 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:33:58.960055  239842 docker.go:132] docker version: linux-20.10.12
	I1231 10:33:58.960175  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.061340  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:58.994194285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:33:59.061470  239842 docker.go:237] overlay module found
	I1231 10:33:59.064676  239842 out.go:176] * Using the docker driver based on existing profile
	I1231 10:33:59.064715  239842 start.go:280] selected driver: docker
	I1231 10:33:59.064721  239842 start.go:795] validating driver "docker" against &{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.
io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.064864  239842 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:33:59.064877  239842 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:33:59.064882  239842 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:33:59.064913  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.064992  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.067375  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.068079  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.179516  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:59.103117577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:33:59.179717  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.179756  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.182917  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.183064  239842 start_flags.go:829] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1231 10:33:59.183104  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:33:59.183116  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:59.183124  239842 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183133  239842 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183142  239842 start_flags.go:298] config:
	{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries
:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.186972  239842 out.go:176] * Starting control plane node newest-cni-20211231103230-6736 in cluster newest-cni-20211231103230-6736
	I1231 10:33:59.187046  239842 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:33:59.189137  239842 out.go:176] * Pulling base image ...
	I1231 10:33:59.189233  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:33:59.189311  239842 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:33:59.189340  239842 cache.go:57] Caching tarball of preloaded images
	I1231 10:33:59.189397  239842 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:33:59.189738  239842 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:33:59.189762  239842 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2-rc.0 on containerd
	I1231 10:33:59.189945  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.232549  239842 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:33:59.232610  239842 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:33:59.232633  239842 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:33:59.232677  239842 start.go:313] acquiring machines lock for newest-cni-20211231103230-6736: {Name:mkea4a41968f23a7f754ed1625a06fab4a3434ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:33:59.232826  239842 start.go:317] acquired machines lock for "newest-cni-20211231103230-6736" in 116.689µs
	I1231 10:33:59.232869  239842 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:33:59.232883  239842 fix.go:55] fixHost starting: 
	I1231 10:33:59.233271  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.275589  239842 fix.go:108] recreateIfNeeded on newest-cni-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:33:59.275624  239842 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:33:57.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:00.110928  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:33:58.633918  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:00.634303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.134303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
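	(Note that the Last Start log interleaves three concurrent test processes, distinguishable by the PID column: 239842 is the newest-cni start being traced here, while 232840 and 219726 are the default-k8s-different-port and embed-certs profiles polling node readiness. A sketch for following a single process, assuming the combined log has been saved to a file such as minikube.log, which is hypothetical here:)
	    grep ' 239842 ' minikube.log    # only the newest-cni start lines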
	I1231 10:33:59.278797  239842 out.go:176] * Restarting existing docker container for "newest-cni-20211231103230-6736" ...
	I1231 10:33:59.278893  239842 cli_runner.go:133] Run: docker start newest-cni-20211231103230-6736
	I1231 10:33:59.808917  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.856009  239842 kic.go:420] container "newest-cni-20211231103230-6736" state is running.
	I1231 10:33:59.856565  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:33:59.904405  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.904680  239842 machine.go:88] provisioning docker machine ...
	I1231 10:33:59.904703  239842 ubuntu.go:169] provisioning hostname "newest-cni-20211231103230-6736"
	I1231 10:33:59.904740  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:33:59.947914  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:33:59.948105  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:33:59.948124  239842 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20211231103230-6736 && echo "newest-cni-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:33:59.949036  239842 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47278->127.0.0.1:49417: read: connection reset by peer
	I1231 10:34:03.099652  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20211231103230-6736
	
	I1231 10:34:03.099755  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.144059  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:34:03.144255  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:34:03.144303  239842 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
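	(The shell snippet above is idempotent hostname plumbing: it runs only if no /etc/hosts line already names the new host, and then either rewrites an existing 127.0.1.1 entry in place or appends one, so name resolution for the renamed node keeps working. A hedged check over SSH, assuming the rename succeeded:)
	    grep 127.0.1.1 /etc/hosts    # expected: 127.0.1.1 newest-cni-20211231103230-6736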
	I1231 10:34:03.284958  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:34:03.284998  239842 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:34:03.285062  239842 ubuntu.go:177] setting up certificates
	I1231 10:34:03.285076  239842 provision.go:83] configureAuth start
	I1231 10:34:03.285144  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:03.331319  239842 provision.go:138] copyHostCerts
	I1231 10:34:03.331385  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:34:03.331393  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:34:03.331460  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:34:03.331544  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:34:03.331558  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:34:03.331579  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:34:03.331625  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:34:03.331638  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:34:03.331657  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:34:03.331695  239842 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20211231103230-6736 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20211231103230-6736]
	I1231 10:34:03.586959  239842 provision.go:172] copyRemoteCerts
	I1231 10:34:03.587049  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:34:03.587091  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.625102  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.720644  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:34:03.740753  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I1231 10:34:03.760615  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:34:03.780213  239842 provision.go:86] duration metric: configureAuth took 495.114028ms
	I1231 10:34:03.780262  239842 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:34:03.780481  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:03.780495  239842 machine.go:91] provisioned docker machine in 3.875801286s
	I1231 10:34:03.780501  239842 start.go:267] post-start starting for "newest-cni-20211231103230-6736" (driver="docker")
	I1231 10:34:03.780506  239842 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:34:03.780545  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:34:03.780586  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:02.111051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:04.112682  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:05.633463  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:08.133359  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.820417  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.925875  239842 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:34:03.930813  239842 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:34:03.930851  239842 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:34:03.930864  239842 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:34:03.930872  239842 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:34:03.930885  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:34:03.930949  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:34:03.931028  239842 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:34:03.931126  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:34:03.940439  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:03.962877  239842 start.go:270] post-start completed in 182.361292ms
	I1231 10:34:03.962946  239842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:34:03.962978  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.003202  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.101581  239842 fix.go:57] fixHost completed within 4.868692803s
	I1231 10:34:04.101619  239842 start.go:80] releasing machines lock for "newest-cni-20211231103230-6736", held for 4.868770707s
	I1231 10:34:04.101715  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:04.141381  239842 ssh_runner.go:195] Run: systemctl --version
	I1231 10:34:04.141446  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.141386  239842 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:34:04.141549  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.184922  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.186345  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.300405  239842 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:34:04.314857  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:34:04.326678  239842 docker.go:158] disabling docker service ...
	I1231 10:34:04.326732  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:34:04.338952  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:34:04.350188  239842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:34:04.441879  239842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:34:04.523149  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:34:04.533774  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:34:04.548558  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
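The long quoted argument above is the node's containerd config.toml, shipped as a base64 payload so it survives shell quoting, and the %!s(MISSING) is minikube's logger rendering a printf %s placeholder without its argument. Piping the payload through base64 -d reproduces the file; a spot-checked excerpt of the decoded settings (reconstructed from the payload above, worth re-decoding before relying on it):

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.mk"

Note that conf_dir matches the kubelet's cni-conf-dir override elsewhere in this run.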
	I1231 10:34:04.563078  239842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:34:04.570184  239842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:34:04.577904  239842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:34:04.661512  239842 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:34:04.743598  239842 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:34:04.743665  239842 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:34:04.748322  239842 start.go:458] Will wait 60s for crictl version
	I1231 10:34:04.748376  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:04.776556  239842 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:34:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:34:06.611941  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:09.111247  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:10.633648  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:13.133617  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:11.111440  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:13.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.111758  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.824426  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:15.852646  239842 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:34:15.852709  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.875454  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.899983  239842 out.go:176] * Preparing Kubernetes v1.23.2-rc.0 on containerd 1.4.12 ...
	I1231 10:34:15.900089  239842 cli_runner.go:133] Run: docker network inspect newest-cni-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:34:15.938827  239842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1231 10:34:15.942919  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:15.956827  239842 out.go:176]   - kubelet.network-plugin=cni
	I1231 10:34:15.959219  239842 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1231 10:34:15.961312  239842 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:34:15.963727  239842 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:34:15.632937  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:18.132486  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:15.966056  239842 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
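The kubelet.* / kubeadm.* lines above are minikube echoing its per-component overrides; they correspond to --extra-config start flags along these lines (reconstructed for illustration only, since the actual invocation is outside this excerpt):

    out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 \
      --extra-config=kubelet.network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --extra-config=kubelet.global-housekeeping-interval=60m \
      --extra-config=kubelet.housekeeping-interval=5m \
      --extra-config=kubelet.cni-conf-dir=/etc/cni/net.mk

Each component.key=value pair is threaded into the kubeadm config and kubelet unit rendered below.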
	I1231 10:34:15.966172  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:34:15.966243  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:15.995265  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:15.995290  239842 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:34:15.995331  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:16.022435  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:16.022458  239842 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:34:16.022504  239842 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:34:16.051296  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:16.051321  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:16.051335  239842 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1231 10:34:16.051348  239842 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20211231103230-6736 NodeName:newest-cni-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-
elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:34:16.051545  239842 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
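The three YAML documents above (InitConfiguration plus ClusterConfiguration, then KubeletConfiguration, then KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On a cluster that is still running, the rendered file can be read straight off the node with a generic invocation like:

    minikube ssh -p newest-cni-20211231103230-6736 -- sudo cat /var/tmp/minikube/kubeadm.yaml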
	I1231 10:34:16.051662  239842 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --global-housekeeping-interval=60m --hostname-override=newest-cni-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
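The drop-in above clears the base unit's ExecStart (the empty ExecStart= line) and replaces it with a kubelet command line wired to containerd's CRI socket, carrying the extra-config flags; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below. Standard systemctl usage (not taken from this log) shows what systemd actually loaded:

    minikube ssh -p newest-cni-20211231103230-6736 -- systemctl cat kubelet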
	I1231 10:34:16.051732  239842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:34:16.059993  239842 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:34:16.060063  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:34:16.068090  239842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (679 bytes)
	I1231 10:34:16.083548  239842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1231 10:34:16.098901  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1231 10:34:16.116150  239842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:34:16.119950  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:16.130712  239842 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736 for IP: 192.168.76.2
	I1231 10:34:16.130826  239842 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:34:16.130889  239842 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:34:16.130980  239842 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/client.key
	I1231 10:34:16.131059  239842 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key.31bdca25
	I1231 10:34:16.131233  239842 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key
	I1231 10:34:16.131373  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:34:16.131415  239842 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:34:16.131431  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:34:16.131463  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:34:16.131498  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:34:16.131533  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:34:16.131586  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:16.132861  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:34:16.154694  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:34:16.176000  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:34:16.199525  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:34:16.222324  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:34:16.245022  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:34:16.269767  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:34:16.296529  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:34:16.320759  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:34:16.344786  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:34:16.368318  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:34:16.389773  239842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:34:16.405384  239842 ssh_runner.go:195] Run: openssl version
	I1231 10:34:16.411011  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:34:16.419791  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423595  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423671  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.429324  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:34:16.437405  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:34:16.446166  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.449927  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.450001  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.455858  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:34:16.464618  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:34:16.474030  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478705  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478802  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.485176  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:34:16.493913  239842 kubeadm.go:388] StartCluster: {Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 Me
tricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:34:16.494024  239842 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:34:16.494078  239842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:34:16.524647  239842 cri.go:87] found id: "155c2bf79c8bd8b8bb6dbeb56755a02c58b466899f6a9e677748a3b0d10686ed"
	I1231 10:34:16.524684  239842 cri.go:87] found id: "8d8c36be3cd9499af063e0e758f58669f30330492f2849aca5442c85468a63bd"
	I1231 10:34:16.524693  239842 cri.go:87] found id: "24d8438942a01eb4000995dcc71ad4b52b67206a5f4af1954e644740df495c62"
	I1231 10:34:16.524705  239842 cri.go:87] found id: "3c9717f9388efe5a2acdad99a248f83aeb684c526c2e02b91a89fd56616cb240"
	I1231 10:34:16.524711  239842 cri.go:87] found id: "f3c21531b1b800501ebef6dcb786a58ec6fe912c6da1b160f1d0589524631a5f"
	I1231 10:34:16.524717  239842 cri.go:87] found id: "f1cfa511a7febb69b44502241a766bc5f1da2755e54f022d08281b3b3c4551ee"
	I1231 10:34:16.524722  239842 cri.go:87] found id: ""
	I1231 10:34:16.524779  239842 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:34:16.541313  239842 cri.go:114] JSON = null
	W1231 10:34:16.541371  239842 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
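The warning above is minikube cross-checking two views of the runtime: crictl found six kube-system containers, while runc list under /run/containerd/runc/k8s.io returned null, so there is nothing to unpause and the mismatch is logged and ignored. Both probes appear verbatim in the log and can be rerun by hand:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json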
	I1231 10:34:16.541451  239842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:34:16.549111  239842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:34:16.556666  239842 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.557897  239842 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:16.558554  239842 kubeconfig.go:127] "newest-cni-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:34:16.559659  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:16.562421  239842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:34:16.570276  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.570337  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.585156  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.785503  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.785582  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.803739  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.986016  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.986082  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.003346  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.185436  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.201434  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.385753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.385834  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.401272  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.585464  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.585557  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.602009  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.786319  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.786390  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.801129  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.985405  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.985492  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.002220  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.185431  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.200513  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.385798  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.385886  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.404508  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.585753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.585841  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.601638  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.785922  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.786018  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:17.611312  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:20.111421  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	W1231 10:34:18.803740  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.985967  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.986067  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.002808  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.186110  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.186208  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.202949  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.386242  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.386333  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.403965  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.586311  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.586409  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.602884  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.602907  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.602960  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.619190  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:34:19.619231  239842 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:34:19.619258  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:34:20.346752  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:20.358199  239842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:34:20.367408  239842 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:34:20.367462  239842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:34:20.377071  239842 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
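All four misses are expected right after the kubeadm reset at 10:34:19: the control-plane kubeconfigs are gone, so stale-config cleanup is skipped and a fresh kubeadm init follows. The same check, runnable by hand (plain ls, not from this log):

    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf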
	I1231 10:34:20.377137  239842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:34:20.729691  239842 out.go:203]   - Generating certificates and keys ...
	I1231 10:34:20.132812  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:22.133593  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:21.380175  239842 out.go:203]   - Booting up control plane ...
	I1231 10:34:22.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.613409  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.633597  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.134575  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.111697  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.610920  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.633090  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:31.633767  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.434058  239842 out.go:203]   - Configuring RBAC rules ...
	I1231 10:34:34.849120  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:34.849163  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:31.611280  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.111498  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.133525  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:36.632536  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.852842  239842 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:34:34.852967  239842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:34:34.857479  239842 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl ...
	I1231 10:34:34.857506  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:34:34.874366  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:34:35.630846  239842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:34:35.630937  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.630961  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=newest-cni-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_34_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.651672  239842 ops.go:34] apiserver oom_adj: -16
	I1231 10:34:35.719909  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.316492  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.815787  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.316079  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.816415  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:38.316091  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.611394  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.632694  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:40.634440  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.133415  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:38.815995  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.316530  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.816139  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.315766  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.815697  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.316307  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.316374  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.816315  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:43.316484  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.611615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:42.612982  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.111363  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.633500  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:48.131867  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.816729  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.316037  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.815895  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.317082  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.315798  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.815886  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.316269  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.815962  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:48.003966  239842 kubeadm.go:864] duration metric: took 12.373088498s to wait for elevateKubeSystemPrivileges.
	I1231 10:34:48.003999  239842 kubeadm.go:390] StartCluster complete in 31.510097342s
	I1231 10:34:48.004022  239842 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.004121  239842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:48.005914  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.526056  239842 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20211231103230-6736" rescaled to 1
	I1231 10:34:48.526130  239842 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}
	I1231 10:34:48.526152  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:34:48.528894  239842 out.go:176] * Verifying Kubernetes components...
	I1231 10:34:48.528963  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:48.526213  239842 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1231 10:34:48.529044  239842 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529069  239842 addons.go:65] Setting dashboard=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529085  239842 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529105  239842 addons.go:65] Setting metrics-server=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529112  239842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20211231103230-6736"
	I1231 10:34:48.529125  239842 addons.go:153] Setting addon metrics-server=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529134  239842 addons.go:165] addon metrics-server should already be in state true
	I1231 10:34:48.529072  239842 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20211231103230-6736"
	I1231 10:34:48.529172  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	W1231 10:34:48.529178  239842 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:34:48.529204  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529086  239842 addons.go:153] Setting addon dashboard=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529469  239842 addons.go:165] addon dashboard should already be in state true
	I1231 10:34:48.529489  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529507  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529669  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529673  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.526424  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:48.530032  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.586829  239842 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:34:48.590085  239842 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.589138  239842 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.590194  239842 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:34:48.590280  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.590345  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:34:48.590360  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:34:48.590419  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.593339  239842 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.593453  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:34:48.593465  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:34:48.593553  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.591124  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.600479  239842 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:34:48.601216  239842 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.601358  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:34:48.601494  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.659910  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.665029  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
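The sed pipeline above splices a hosts block into the CoreDNS Corefile so that in-cluster lookups of host.minikube.internal resolve to the Docker gateway; the inserted stanza, taken from the command itself, is:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

Success is confirmed at 10:34:49.392 below ("host record injected into CoreDNS").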
	I1231 10:34:48.665195  239842 api_server.go:51] waiting for apiserver process to appear ...
	I1231 10:34:48.665258  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:48.667125  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.679789  239842 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:48.679853  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:34:48.679963  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.686805  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.739096  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:47.111713  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:49.112487  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:48.902948  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:34:48.902984  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:34:48.903155  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.903256  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:34:48.903281  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:34:49.003786  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:34:49.003825  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:34:49.006836  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:34:49.006870  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:34:49.007822  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:49.095461  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.095567  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:34:49.102715  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:34:49.102747  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:34:49.206111  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:34:49.206155  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:34:49.207830  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.384988  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:34:49.385020  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:34:49.392682  239842 api_server.go:71] duration metric: took 866.519409ms to wait for apiserver process to appear ...
	I1231 10:34:49.392800  239842 api_server.go:87] waiting for apiserver healthz status ...
	I1231 10:34:49.392828  239842 start.go:773] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1231 10:34:49.392831  239842 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1231 10:34:49.403071  239842 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1231 10:34:49.404217  239842 api_server.go:140] control plane version: v1.23.2-rc.0
	I1231 10:34:49.404287  239842 api_server.go:130] duration metric: took 11.464686ms to wait for apiserver health ...
	I1231 10:34:49.404308  239842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1231 10:34:49.488542  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:34:49.488611  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:34:49.497619  239842 system_pods.go:59] 7 kube-system pods found
	I1231 10:34:49.497777  239842 system_pods.go:61] "coredns-64897985d-fh6sl" [f7a107d1-df7c-4b28-8f0d-eb5e6da38e4f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I1231 10:34:49.497853  239842 system_pods.go:61] "etcd-newest-cni-20211231103230-6736" [9212b001-2098-4378-b03f-05510269335f] Running
	I1231 10:34:49.497879  239842 system_pods.go:61] "kindnet-tkvfw" [4abcfbc0-b4e3-41f0-89d4-26c9a356f41e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1231 10:34:49.497904  239842 system_pods.go:61] "kube-apiserver-newest-cni-20211231103230-6736" [a37117a3-3753-4e5b-b2f3-5d129612ee51] Running
	I1231 10:34:49.497934  239842 system_pods.go:61] "kube-controller-manager-newest-cni-20211231103230-6736" [bae63363-f3ff-4de8-9600-07baeb9a1915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1231 10:34:49.497959  239842 system_pods.go:61] "kube-proxy-228gt" [8d98d417-a803-474a-b07d-aa7c25391bd9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1231 10:34:49.497987  239842 system_pods.go:61] "kube-scheduler-newest-cni-20211231103230-6736" [1a0e5162-834e-4b7f-815a-66f6b1511153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1231 10:34:49.498011  239842 system_pods.go:74] duration metric: took 93.696354ms to wait for pod list to return data ...
	I1231 10:34:49.498038  239842 default_sa.go:34] waiting for default service account to be created ...
	I1231 10:34:49.501869  239842 default_sa.go:45] found service account: "default"
	I1231 10:34:49.501947  239842 default_sa.go:55] duration metric: took 3.889685ms for default service account to be created ...
	I1231 10:34:49.501972  239842 kubeadm.go:542] duration metric: took 975.813843ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1231 10:34:49.502004  239842 node_conditions.go:102] verifying NodePressure condition ...
	I1231 10:34:49.584377  239842 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I1231 10:34:49.584456  239842 node_conditions.go:123] node cpu capacity is 8
	I1231 10:34:49.584501  239842 node_conditions.go:105] duration metric: took 82.480868ms to run NodePressure ...
	I1231 10:34:49.584516  239842 start.go:211] waiting for startup goroutines ...
	I1231 10:34:49.590387  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:34:49.590472  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:34:49.687079  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:34:49.687111  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:34:49.786712  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:49.786813  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:34:49.897022  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:50.206491  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.303301914s)
	I1231 10:34:50.206645  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.198797907s)
	I1231 10:34:50.596403  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388529563s)
	I1231 10:34:50.596444  239842 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20211231103230-6736"
	I1231 10:34:51.611369  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.714292146s)
	I1231 10:34:51.613756  239842 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:34:51.613803  239842 addons.go:417] enableAddons completed in 3.087611947s
	I1231 10:34:51.651587  239842 start.go:493] kubectl: 1.23.1, cluster: 1.23.2-rc.0 (minor skew: 0)
	I1231 10:34:51.654312  239842 out.go:176] * Done! kubectl is now configured to use "newest-cni-20211231103230-6736" cluster and "default" namespace by default
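The healthz probe logged above (GET https://192.168.76.2:8443/healthz answering 200 with body "ok") can be reproduced by hand against the same endpoint. A minimal sketch, assuming the profile's client certificate pair under the default ~/.minikube layout (this job overrides MINIKUBE_HOME, so substitute the paths from the run header accordingly):

	# Query the apiserver health endpoint with the profile's client certs
	curl --cacert ~/.minikube/ca.crt \
	     --cert ~/.minikube/profiles/newest-cni-20211231103230-6736/client.crt \
	     --key  ~/.minikube/profiles/newest-cni-20211231103230-6736/client.key \
	     https://192.168.76.2:8443/healthz
	# A healthy control plane answers with the literal body: ok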
	I1231 10:34:50.132740  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:52.134471  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:53.136634  219726 node_ready.go:38] duration metric: took 4m0.01292471s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:34:53.140709  219726 out.go:176] 
	W1231 10:34:53.140941  219726 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:34:53.140961  219726 out.go:241] * 
	W1231 10:34:53.141727  219726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:34:51.611136  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:53.611579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:56.111245  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:58.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:00.610615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:02.611131  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:04.611362  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:06.611510  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:09.110821  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:11.610753  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:13.611463  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:16.111356  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:18.611389  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:21.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:23.610710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:25.611361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:28.111622  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:30.610549  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:32.610684  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:34.612116  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:37.111767  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:39.611741  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:42.111589  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:44.611029  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:47.111644  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:49.611065  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:52.111090  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:54.111164  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:56.611524  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:59.112071  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:01.610398  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:03.611163  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:06.111054  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:08.610990  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:10.611355  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:13.111940  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:15.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:17.611051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:20.111559  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:22.611443  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:25.110897  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:27.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:29.111842  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:31.611346  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:34.111760  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:36.611457  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:38.611614  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:40.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:43.111250  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:45.111518  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:47.611374  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:50.111716  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:52.611710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:55.111552  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:57.611111  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:59.611292  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:01.611646  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:04.112863  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:06.611407  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:08.611765  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:11.111509  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:13.112579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:15.611537  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:17.611860  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:20.111361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:22.611306  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:24.611757  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:26.611944  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:29.112062  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:31.611278  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:34.110991  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.611232  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.613526  232840 node_ready.go:38] duration metric: took 4m0.012005111s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:37:36.616503  232840 out.go:176] 
	W1231 10:37:36.616735  232840 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:37:36.616764  232840 out.go:241] * 
	W1231 10:37:36.617727  232840 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
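Both failed runs exit the same way: GUEST_START after exhausting the 6m node-readiness budget, with the last 4m spent polling "Ready":"False" as seen above. The equivalent wait can be issued directly with kubectl, assuming the kubeconfig for this profile is active; the node name is taken from the log:

	# Re-run the readiness wait that timed out, with the same 6m budget
	kubectl wait --for=condition=Ready \
	  node/default-k8s-different-port-20211231103230-6736 --timeout=6m
	# Dump the raw condition object the poller was watching
	kubectl get node default-k8s-different-port-20211231103230-6736 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'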
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9711ffb10b897       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   04b659f964be5
	91f9570ac5962       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   6f185cd7b6c56
	090a101afa0e5       b2756210eeabf       12 minutes ago      Running             etcd                      0                   295e37d445215
	a0fea282c2cab       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   410795c7cb2b8
	fddc6f96e1ab6       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   e70ba3548c048
	c5161903fa798       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   3badc9a2068b0
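The table above is in `crictl ps -a` format; a plausible way to regenerate it on the node (profile name inferred from the sections below) is:

	# List all containers, including exited ones, inside the minikube node
	minikube ssh -p old-k8s-version-20211231102602-6736 -- sudo crictl ps -a

Note the kindnet-cni entry: Exited, ATTEMPT 3, while every control-plane container has been Running for 12 minutes. The CNI pod is crash-looping rather than staying up.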
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:39:03 UTC. --
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.121007989Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.693445401Z" level=info msg="RemoveContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.699838948Z" level=info msg="RemoveContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\" returns successfully"
	Dec 31 10:32:36 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:36.028187479Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.194577290Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.195369432Z" level=info msg="StartContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.406751724Z" level=info msg="StartContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\" returns successfully"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.697424396Z" level=info msg="Finish piping stderr of container \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.697545851Z" level=info msg="Finish piping stdout of container \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.698495474Z" level=info msg="TaskExit event &TaskExit{ContainerID:8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71,ID:8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71,Pid:3149,ExitStatus:2,ExitedAt:2021-12-31 10:35:30.698074741 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.727245760Z" level=info msg="shim disconnected" id=8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.727370050Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.997100704Z" level=info msg="RemoveContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\""
	Dec 31 10:35:31 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:31.004566807Z" level=info msg="RemoveContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\" returns successfully"
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.025726225Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.048117919Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.048735700Z" level=info msg="StartContainer for \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.203719628Z" level=info msg="StartContainer for \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\" returns successfully"
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.502138131Z" level=info msg="Finish piping stderr of container \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.502229882Z" level=info msg="Finish piping stdout of container \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.503021538Z" level=info msg="TaskExit event &TaskExit{ContainerID:9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb,ID:9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb,Pid:3601,ExitStatus:2,ExitedAt:2021-12-31 10:38:37.50263657 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.533047670Z" level=info msg="shim disconnected" id=9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.533166582Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:38.279424060Z" level=info msg="RemoveContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:38.286244909Z" level=info msg="RemoveContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\" returns successfully"
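The "-- Logs begin at ... --" banner marks this section as journalctl output, so the same window can likely be re-extracted on the node; the grep filter below is only an illustration:

	# Pull containerd's journal and keep the kindnet-related lines
	minikube ssh -p old-k8s-version-20211231102602-6736 -- \
	  "sudo journalctl -u containerd --no-pager | grep -i kindnet"

The visible cycle (CreateContainer/StartContainer, then TaskExit with ExitStatus:2 a few minutes later) matches the Exited kindnet-cni container in the status table above.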
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_26_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:26:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-gjbqc                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-hdtr6                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  Starting                 12m                kubelet, old-k8s-version-20211231102602-6736     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet, old-k8s-version-20211231102602-6736     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
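The describe output localizes the failure: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint, and its Ready condition is False with "cni plugin not initialized". A compact check for exactly those two signals, assuming kubectl points at this cluster:

	# Print the taints and the Ready condition's message in one pass
	kubectl get node old-k8s-version-20211231102602-6736 -o \
	  jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'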
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
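The repeated "martian source" entries are the kernel flagging packets whose source address is implausible on the receiving interface (logged when net.ipv4.conf.*.log_martians is enabled); they are noisy here but incidental to the CNI failure. To gauge the volume, something like:

	# Count martian-packet lines in the kernel ring buffer
	minikube ssh -p old-k8s-version-20211231102602-6736 -- "sudo dmesg | grep -c martian"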
	
	* 
	* ==> etcd [090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6] <==
	* 2021-12-31 10:26:33.403290 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:26:33.404899 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:26:33.405089 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:26:33.405340 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:26:34.390892 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:26:34.390937 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:26:34.390955 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:26:34.390970 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:26:34.390978 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:26:34.391151 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:26:34.392751 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:26:34.392798 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:26:34.392812 I | embed: ready to serve client requests
	2021-12-31 10:26:34.392846 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:26:34.392895 I | embed: ready to serve client requests
	2021-12-31 10:26:34.395926 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:26:34.396034 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:29:41.263024 W | etcdserver: request "header:<ID:8128010034796901496 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:446 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128010034796901494 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>" with result "size:16" took too long (169.430776ms) to execute
	2021-12-31 10:29:41.263180 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (170.047688ms) to execute
	2021-12-31 10:29:41.361952 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (245.828121ms) to execute
	2021-12-31 10:29:41.569861 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (305.209221ms) to execute
	2021-12-31 10:29:56.808390 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (191.596053ms) to execute
	2021-12-31 10:32:38.401363 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (109.529951ms) to execute
	2021-12-31 10:36:34.414815 I | mvcc: store.index: compact 484
	2021-12-31 10:36:34.415894 I | mvcc: finished scheduled compaction at 484 (took 593.306µs)
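The "took too long ... to execute" warnings indicate slow request handling (most plausibly disk latency on the shared CI host) rather than a functional etcd fault; the cluster kept serving throughout. Since the startup lines above show metrics listeners on 127.0.0.1:2381, fsync latency can be sampled in place, assuming curl is present in the node image:

	# Sample etcd's WAL fsync latency histogram from the metrics endpoint
	minikube ssh -p old-k8s-version-20211231102602-6736 -- \
	  "curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration_seconds"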
	
	* 
	* ==> kernel <==
	*  10:39:03 up  1:21,  0 users,  load average: 0.40, 1.27, 2.13
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe] <==
	* I1231 10:26:37.667487       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I1231 10:26:37.667494       1 naming_controller.go:288] Starting NamingConditionController
	I1231 10:26:37.667500       1 establishing_controller.go:73] Starting EstablishingController
	E1231 10:26:37.691341       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1231 10:26:37.778842       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:26:37.778919       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1231 10:26:37.779139       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:26:37.779195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:26:38.665987       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1231 10:26:38.666024       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:26:38.666037       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:26:38.670056       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1231 10:26:38.673207       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:26:38.673236       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1231 10:26:40.447719       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:26:40.727470       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1231 10:26:41.010135       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1231 10:26:41.011069       1 controller.go:606] quota admission added evaluator for: endpoints
	I1231 10:26:41.096654       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:26:41.899110       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1231 10:26:42.121597       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1231 10:26:42.438856       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1231 10:26:57.699475       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1231 10:26:57.711823       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1231 10:26:57.751097       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0] <==
	* I1231 10:26:57.615461       1 shared_informer.go:204] Caches are synced for PV protection 
	I1231 10:26:57.656384       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1231 10:26:57.682073       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1231 10:26:57.697117       1 shared_informer.go:204] Caches are synced for deployment 
	I1231 10:26:57.700183       1 shared_informer.go:204] Caches are synced for attach detach 
	I1231 10:26:57.702252       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1231 10:26:57.703189       1 shared_informer.go:204] Caches are synced for expand 
	I1231 10:26:57.709182       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:57.717504       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-cqjc7
	I1231 10:26:57.738063       1 shared_informer.go:204] Caches are synced for disruption 
	I1231 10:26:57.738105       1 disruption.go:341] Sending events to api server.
	I1231 10:26:57.747623       1 shared_informer.go:204] Caches are synced for daemon sets 
	I1231 10:26:57.787815       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gjbqc
	I1231 10:26:57.787853       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hdtr6
	I1231 10:26:57.808977       1 shared_informer.go:204] Caches are synced for stateful set 
	I1231 10:26:57.809469       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.809546       1 shared_informer.go:204] Caches are synced for resource quota 
	E1231 10:26:57.815277       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", ResourceVersion:"232", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000627120), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0006271e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627220)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627640)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac17c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136cc98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc5a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e020)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cce0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1231 10:26:57.816055       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", ResourceVersion:"215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000626d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a48740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626d80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000626ea0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac1680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136ca78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e018)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cab8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1231 10:26:57.878955       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.879123       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:26:57.889624       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1231 10:26:57.907023       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:58.905952       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I1231 10:26:59.006259       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512] <==
	* W1231 10:26:58.698313       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:26:58.707406       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:26:58.707466       1 server_others.go:149] Using iptables Proxier.
	I1231 10:26:58.708676       1 server.go:529] Version: v1.16.0
	I1231 10:26:58.709278       1 config.go:131] Starting endpoints config controller
	I1231 10:26:58.709318       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:26:58.709660       1 config.go:313] Starting service config controller
	I1231 10:26:58.709692       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:26:58.809529       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1231 10:26:58.809853       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4] <==
	* E1231 10:26:37.805362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:37.806980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:37.807074       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:37.807236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:37.882619       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:37.882673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:37.882844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:37.882953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.883629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.884708       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:37.885065       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:38.807000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:38.808398       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:38.809398       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:38.810390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:38.884131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:38.885117       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:38.886254       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:38.887753       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.888713       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.889525       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:38.890428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:57.722315       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:57.792300       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:59.497262       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:39:03 UTC. --
	Dec 31 10:38:04 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:04.686465     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:07 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:07.333284     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:12 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:12.335994     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:14 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:14.722211     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:14 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:14.722260     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:17 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:17.337018     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:22 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:22.337975     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:24 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:24.755273     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:24 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:24.755316     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:27 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:27.338809     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:32 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:32.339728     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:34 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:34.789276     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:34 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:34.789344     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:37.340687     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:38.279231     863 pod_workers.go:191] Error syncing pod a71bd990-5819-4720-aba3-d5cdc1c779dd ("kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"
	Dec 31 10:38:42 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:42.341767     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:44 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:44.819275     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:44 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:44.819311     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:47 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:47.342684     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:52 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:52.343558     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:53 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:53.023800     863 pod_workers.go:191] Error syncing pod a71bd990-5819-4720-aba3-d5cdc1c779dd ("kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"
	Dec 31 10:38:54 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:54.849944     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:54 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:54.849994     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:57 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:57.344624     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:39:02 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:39:02.345575     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
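
Note on the daemon_controller.go errors in the log above ("Operation cannot be fulfilled on daemonsets.apps ...: the object has been modified"): these are ordinary optimistic-concurrency conflicts. The controller wrote DaemonSet status using a stale resourceVersion and the API server answered 409 Conflict; the controller re-reads and retries, so the entries are noise rather than the cause of the failed start. A minimal sketch of the standard retry pattern with client-go's retry.RetryOnConflict (assuming client-go v0.18+ signatures; the kubeconfig path and the annotation being written are hypothetical placeholders, not taken from this test):

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		// Hypothetical kubeconfig path; any context pointing at the cluster works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ds := clientset.AppsV1().DaemonSets("kube-system")
		// RetryOnConflict re-runs the closure whenever the write fails with a
		// 409 Conflict, re-reading the object so each attempt carries the
		// latest resourceVersion.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			current, getErr := ds.Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
			if getErr != nil {
				return getErr
			}
			if current.Annotations == nil {
				current.Annotations = map[string]string{}
			}
			current.Annotations["example/touched"] = "true" // placeholder mutation for illustration
			_, updateErr := ds.Update(context.TODO(), current, metav1.UpdateOptions{})
			return updateErr
		})
		if err != nil {
			log.Fatal(err)
		}
	}

retry.DefaultRetry backs off a few times before giving up, and any error other than a conflict is returned immediately, which matches how the built-in controllers behave.
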
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1 (76.194295ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ddtj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-9ddtj:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-9ddtj
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m3s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m26s (x1 over 6m56s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-cqjc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1
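
The busybox pod above never schedules: "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate." Its Tolerations cover node.kubernetes.io/not-ready and node.kubernetes.io/unreachable only with the NoExecute effect, and the kubelet log shows the CNI plugin never initialized, so the node most likely stayed NotReady and carried a not-ready NoSchedule taint that nothing tolerates. A small client-go sketch for confirming which taint blocked scheduling (kubeconfig path hypothetical, client-go v0.18+ assumed):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Print every taint so the FailedScheduling event can be matched to a
		// concrete key/effect pair.
		for _, n := range nodes.Items {
			for _, t := range n.Spec.Taints {
				fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
			}
		}
	}

The same information is shown by kubectl describe nodes under its Taints: field.
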
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:26:13.982747787Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/docker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8fb65b850e95d9291586c192e53e52d1c3afd0fdfabe6699d1eda53b3ac8da7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49387"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49383"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49384"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c8fb65b850e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "a8e57908631bc9f44ae1933829bac6c4d6b691fb5425dc0680fa172c99243c35",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
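
The inspect output above documents how minikube's docker driver exposes the node: each service port (22/tcp for SSH, 8443/tcp for the API server, and so on) is published only on 127.0.0.1 with an ephemeral host port (49387, 49384, ...). A short sketch of reading those mappings programmatically with the Docker Go SDK (github.com/docker/docker/client); the container name is the one from the inspect output, everything else is illustrative:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		// Container name taken from the docker inspect output above.
		info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-20211231102602-6736")
		if err != nil {
			log.Fatal(err)
		}
		// NetworkSettings.Ports maps container ports ("22/tcp", "8443/tcp", ...)
		// to the 127.0.0.1 host bindings allocated when the node started.
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}

From the CLI, docker port old-k8s-version-20211231102602-6736 8443 prints the same mapping.
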
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.150220718s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:59 UTC | Fri, 31 Dec 2021 10:31:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:53 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:32:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=docker                                            |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:22 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:33:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:33:58.889763  239842 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:33:58.889968  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890015  239842 out.go:310] Setting ErrFile to fd 2...
	I1231 10:33:58.890028  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890301  239842 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:33:58.890755  239842 out.go:304] Setting JSON to false
	I1231 10:33:58.892928  239842 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4593,"bootTime":1640942245,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:33:58.893046  239842 start.go:122] virtualization: kvm guest
	I1231 10:33:58.896075  239842 out.go:176] * [newest-cni-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:33:58.898770  239842 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:33:58.896425  239842 notify.go:174] Checking for updates...
	I1231 10:33:58.901377  239842 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:33:58.904292  239842 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:33:58.906743  239842 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:33:58.909823  239842 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:33:58.911269  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:33:58.911745  239842 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:33:58.960055  239842 docker.go:132] docker version: linux-20.10.12
	I1231 10:33:58.960175  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.061340  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:58.994194285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:33:59.061470  239842 docker.go:237] overlay module found
	I1231 10:33:59.064676  239842 out.go:176] * Using the docker driver based on existing profile
	I1231 10:33:59.064715  239842 start.go:280] selected driver: docker
	I1231 10:33:59.064721  239842 start.go:795] validating driver "docker" against &{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.064864  239842 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:33:59.064877  239842 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:33:59.064882  239842 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:33:59.064913  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.064992  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.067375  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
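
The warning above means the host kernel lacks (or has not mounted) the memory cgroup controller, so the requested container memory limit cannot be enforced. A quick manual check on the host, as a sketch (cgroup v1 layout, as on this Ubuntu 20.04 agent):

    # "memory" should be listed and enabled in /proc/cgroups,
    # and mounted as a cgroup filesystem, for limits to take effect
    grep memory /proc/cgroups
    mount -t cgroup | grep memory
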
	I1231 10:33:59.068079  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.179516  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:59.103117577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:33:59.179717  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.179756  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.182917  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.183064  239842 start_flags.go:829] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1231 10:33:59.183104  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:33:59.183116  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:59.183124  239842 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183133  239842 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183142  239842 start_flags.go:298] config:
	{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.186972  239842 out.go:176] * Starting control plane node newest-cni-20211231103230-6736 in cluster newest-cni-20211231103230-6736
	I1231 10:33:59.187046  239842 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:33:59.189137  239842 out.go:176] * Pulling base image ...
	I1231 10:33:59.189233  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:33:59.189311  239842 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:33:59.189340  239842 cache.go:57] Caching tarball of preloaded images
	I1231 10:33:59.189397  239842 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:33:59.189738  239842 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:33:59.189762  239842 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2-rc.0 on containerd
	I1231 10:33:59.189945  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.232549  239842 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:33:59.232610  239842 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:33:59.232633  239842 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:33:59.232677  239842 start.go:313] acquiring machines lock for newest-cni-20211231103230-6736: {Name:mkea4a41968f23a7f754ed1625a06fab4a3434ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:33:59.232826  239842 start.go:317] acquired machines lock for "newest-cni-20211231103230-6736" in 116.689µs
	I1231 10:33:59.232869  239842 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:33:59.232883  239842 fix.go:55] fixHost starting: 
	I1231 10:33:59.233271  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.275589  239842 fix.go:108] recreateIfNeeded on newest-cni-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:33:59.275624  239842 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:33:57.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:00.110928  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:33:58.633918  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:00.634303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.134303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:59.278797  239842 out.go:176] * Restarting existing docker container for "newest-cni-20211231103230-6736" ...
	I1231 10:33:59.278893  239842 cli_runner.go:133] Run: docker start newest-cni-20211231103230-6736
	I1231 10:33:59.808917  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.856009  239842 kic.go:420] container "newest-cni-20211231103230-6736" state is running.
	I1231 10:33:59.856565  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:33:59.904405  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.904680  239842 machine.go:88] provisioning docker machine ...
	I1231 10:33:59.904703  239842 ubuntu.go:169] provisioning hostname "newest-cni-20211231103230-6736"
	I1231 10:33:59.904740  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:33:59.947914  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:33:59.948105  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:33:59.948124  239842 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20211231103230-6736 && echo "newest-cni-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:33:59.949036  239842 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47278->127.0.0.1:49417: read: connection reset by peer
	I1231 10:34:03.099652  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20211231103230-6736
	
	I1231 10:34:03.099755  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.144059  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:34:03.144255  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:34:03.144303  239842 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:34:03.284958  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:34:03.284998  239842 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:34:03.285062  239842 ubuntu.go:177] setting up certificates
	I1231 10:34:03.285076  239842 provision.go:83] configureAuth start
	I1231 10:34:03.285144  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:03.331319  239842 provision.go:138] copyHostCerts
	I1231 10:34:03.331385  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:34:03.331393  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:34:03.331460  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:34:03.331544  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:34:03.331558  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:34:03.331579  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:34:03.331625  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:34:03.331638  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:34:03.331657  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:34:03.331695  239842 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20211231103230-6736 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20211231103230-6736]
	I1231 10:34:03.586959  239842 provision.go:172] copyRemoteCerts
	I1231 10:34:03.587049  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:34:03.587091  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.625102  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.720644  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:34:03.740753  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I1231 10:34:03.760615  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:34:03.780213  239842 provision.go:86] duration metric: configureAuth took 495.114028ms
	I1231 10:34:03.780262  239842 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:34:03.780481  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:03.780495  239842 machine.go:91] provisioned docker machine in 3.875801286s
	I1231 10:34:03.780501  239842 start.go:267] post-start starting for "newest-cni-20211231103230-6736" (driver="docker")
	I1231 10:34:03.780506  239842 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:34:03.780545  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:34:03.780586  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:02.111051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:04.112682  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:05.633463  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:08.133359  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.820417  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.925875  239842 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:34:03.930813  239842 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:34:03.930851  239842 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:34:03.930864  239842 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:34:03.930872  239842 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:34:03.930885  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:34:03.930949  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:34:03.931028  239842 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:34:03.931126  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:34:03.940439  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:03.962877  239842 start.go:270] post-start completed in 182.361292ms
	I1231 10:34:03.962946  239842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:34:03.962978  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.003202  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.101581  239842 fix.go:57] fixHost completed within 4.868692803s
	I1231 10:34:04.101619  239842 start.go:80] releasing machines lock for "newest-cni-20211231103230-6736", held for 4.868770707s
	I1231 10:34:04.101715  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:04.141381  239842 ssh_runner.go:195] Run: systemctl --version
	I1231 10:34:04.141446  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.141386  239842 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:34:04.141549  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.184922  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.186345  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.300405  239842 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:34:04.314857  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:34:04.326678  239842 docker.go:158] disabling docker service ...
	I1231 10:34:04.326732  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:34:04.338952  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:34:04.350188  239842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:34:04.441879  239842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:34:04.523149  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:34:04.533774  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:34:04.548558  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
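
The two /bin/bash -c commands above configure the CRI client and the runtime itself. The printf payload for /etc/crictl.yaml lands verbatim, pointing both crictl endpoints at the containerd socket so crictl talks to containerd rather than Docker:

    # /etc/crictl.yaml, as written by the command above
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock

The containerd config is shipped base64-encoded so the quoting of a multi-line TOML file never touches the remote shell. Decoding the blob shows what gets installed (a sketch; paste the payload from the log line):

    # Decode the config that the command above writes to /etc/containerd/config.toml
    echo '<base64 payload from the log line>' | base64 -d | less
    # The decoded TOML begins:
    #   version = 2
    #   root = "/var/lib/containerd"
    #   state = "/run/containerd"
    # and, notably for this run, sets SystemdCgroup = false,
    # sandbox_image = "k8s.gcr.io/pause:3.6", and conf_dir = "/etc/cni/net.mk".
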
	I1231 10:34:04.563078  239842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:34:04.570184  239842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:34:04.577904  239842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:34:04.661512  239842 ssh_runner.go:195] Run: sudo systemctl restart containerd
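
Before restarting containerd, minikube touches the two kernel networking prerequisites for pod traffic: bridged packets must traverse iptables, and IPv4 forwarding must be enabled. Equivalent manual checks, as a sketch:

    # Both should report 1 for pod networking to function
    sudo sysctl net.bridge.bridge-nf-call-iptables
    cat /proc/sys/net/ipv4/ip_forward
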
	I1231 10:34:04.743598  239842 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:34:04.743665  239842 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:34:04.748322  239842 start.go:458] Will wait 60s for crictl version
	I1231 10:34:04.748376  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:04.776556  239842 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:34:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
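
This failure is a benign race: crictl is queried immediately after the containerd restart, before its CRI server has finished initializing, so minikube backs off and retries (successfully at 10:34:15 below). An illustrative way to wait out the same race by hand, assuming crictl is configured as above:

    # Poll until the CRI server answers, then print its version
    until sudo crictl version >/dev/null 2>&1; do sleep 1; done
    sudo crictl version
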
	I1231 10:34:06.611941  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:09.111247  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:10.633648  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:13.133617  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:11.111440  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:13.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.111758  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.824426  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:15.852646  239842 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:34:15.852709  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.875454  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.899983  239842 out.go:176] * Preparing Kubernetes v1.23.2-rc.0 on containerd 1.4.12 ...
	I1231 10:34:15.900089  239842 cli_runner.go:133] Run: docker network inspect newest-cni-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:34:15.938827  239842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1231 10:34:15.942919  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
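
The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the current mapping, and install the temp file with sudo cp (a plain > redirect would not work, since the redirection is performed by the unprivileged shell rather than by sudo). Unpacked for readability:

    # Rebuild /etc/hosts without a stale entry, then append the fresh mapping
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
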
	I1231 10:34:15.956827  239842 out.go:176]   - kubelet.network-plugin=cni
	I1231 10:34:15.959219  239842 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1231 10:34:15.961312  239842 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:34:15.963727  239842 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:34:15.632937  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:18.132486  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:15.966056  239842 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:34:15.966172  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:34:15.966243  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:15.995265  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:15.995290  239842 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:34:15.995331  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:16.022435  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:16.022458  239842 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:34:16.022504  239842 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:34:16.051296  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:16.051321  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:16.051335  239842 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1231 10:34:16.051348  239842 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20211231103230-6736 NodeName:newest-cni-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:34:16.051545  239842 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:34:16.051662  239842 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --global-housekeeping-interval=60m --hostname-override=newest-cni-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1231 10:34:16.051732  239842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:34:16.059993  239842 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:34:16.060063  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:34:16.068090  239842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (679 bytes)
	I1231 10:34:16.083548  239842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1231 10:34:16.098901  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
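
The three scp lines above place the rendered files at fixed paths on the node: the kubelet systemd drop-in, the kubelet unit itself, and the kubeadm config staged with a .new suffix (presumably so it can be compared against a previous run's config before being adopted). A sketch for inspecting them, e.g. via minikube ssh:

    # Where the rendered files land on the node
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /lib/systemd/system/kubelet.service
    sudo head -n 20 /var/tmp/minikube/kubeadm.yaml.new
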
	I1231 10:34:16.116150  239842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:34:16.119950  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:16.130712  239842 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736 for IP: 192.168.76.2
	I1231 10:34:16.130826  239842 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:34:16.130889  239842 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:34:16.130980  239842 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/client.key
	I1231 10:34:16.131059  239842 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key.31bdca25
	I1231 10:34:16.131233  239842 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key
	I1231 10:34:16.131373  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:34:16.131415  239842 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:34:16.131431  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:34:16.131463  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:34:16.131498  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:34:16.131533  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:34:16.131586  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:16.132861  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:34:16.154694  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:34:16.176000  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:34:16.199525  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:34:16.222324  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:34:16.245022  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:34:16.269767  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:34:16.296529  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:34:16.320759  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:34:16.344786  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:34:16.368318  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:34:16.389773  239842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:34:16.405384  239842 ssh_runner.go:195] Run: openssl version
	I1231 10:34:16.411011  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:34:16.419791  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423595  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423671  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.429324  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:34:16.437405  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:34:16.446166  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.449927  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.450001  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.455858  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:34:16.464618  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:34:16.474030  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478705  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478802  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.485176  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
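[Editor's note] The three certificate blocks above follow OpenSSL's subject-hash lookup convention: each PEM is placed under /usr/share/ca-certificates, hashed, and symlinked as HASH.0 under /etc/ssl/certs so the system trust store can resolve it (b5213941 is the hash of minikubeCA in this run). A minimal sketch of the same steps, with an illustrative certificate path:

    # Compute the subject hash OpenSSL uses for CA lookup, then create the
    # HASH.0 symlink the trust store expects (path is illustrative).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"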
	I1231 10:34:16.493913  239842 kubeadm.go:388] StartCluster: {Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:34:16.494024  239842 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:34:16.494078  239842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:34:16.524647  239842 cri.go:87] found id: "155c2bf79c8bd8b8bb6dbeb56755a02c58b466899f6a9e677748a3b0d10686ed"
	I1231 10:34:16.524684  239842 cri.go:87] found id: "8d8c36be3cd9499af063e0e758f58669f30330492f2849aca5442c85468a63bd"
	I1231 10:34:16.524693  239842 cri.go:87] found id: "24d8438942a01eb4000995dcc71ad4b52b67206a5f4af1954e644740df495c62"
	I1231 10:34:16.524705  239842 cri.go:87] found id: "3c9717f9388efe5a2acdad99a248f83aeb684c526c2e02b91a89fd56616cb240"
	I1231 10:34:16.524711  239842 cri.go:87] found id: "f3c21531b1b800501ebef6dcb786a58ec6fe912c6da1b160f1d0589524631a5f"
	I1231 10:34:16.524717  239842 cri.go:87] found id: "f1cfa511a7febb69b44502241a766bc5f1da2755e54f022d08281b3b3c4551ee"
	I1231 10:34:16.524722  239842 cri.go:87] found id: ""
	I1231 10:34:16.524779  239842 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:34:16.541313  239842 cri.go:114] JSON = null
	W1231 10:34:16.541371  239842 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
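[Editor's note] The "unpause failed" warning above comes from cross-checking two views of the runtime: crictl reported six kube-system containers, while runc's listing under the k8s.io root returned null JSON, so minikube logs the discrepancy and skips the unpause step. The two probes, exactly as run above:

    # List kube-system container IDs as CRI sees them, then ask runc which
    # containers it tracks; disagreement here is what triggers the warning.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json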
	I1231 10:34:16.541451  239842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:34:16.549111  239842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:34:16.556666  239842 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.557897  239842 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:16.558554  239842 kubeconfig.go:127] "newest-cni-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:34:16.559659  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:16.562421  239842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:34:16.570276  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.570337  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.585156  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.785503  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.785582  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.803739  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.986016  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.986082  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.003346  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.185436  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.201434  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.385753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.385834  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.401272  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.585464  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.585557  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.602009  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.786319  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.786390  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.801129  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.985405  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.985492  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.002220  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.185431  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.200513  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.385798  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.385886  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.404508  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.585753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.585841  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.601638  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.785922  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.786018  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:17.611312  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:20.111421  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	W1231 10:34:18.803740  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.985967  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.986067  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.002808  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.186110  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.186208  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.202949  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.386242  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.386333  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.403965  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.586311  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.586409  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.602884  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.602907  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.602960  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.619190  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:34:19.619231  239842 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
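[Editor's note] Every "Checking apiserver status" probe above is the same pgrep invocation: -f matches against the full command line, -x requires that match to be exact, and -n selects the newest matching process, so exit status 1 simply means no kube-apiserver process exists yet. A sketch of the probe wrapped in a retry loop (the bound and delay are illustrative, not minikube's own):

    # Poll for a kube-apiserver process the way the log does; pgrep exits
    # non-zero until one appears, at which point its PID is printed.
    for _ in $(seq 1 15); do
        if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
            echo "apiserver pid: ${pid}"
            break
        fi
        sleep 0.2
    done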
	I1231 10:34:19.619258  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:34:20.346752  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:20.358199  239842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:34:20.367408  239842 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:34:20.367462  239842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:34:20.377071  239842 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:34:20.377137  239842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:34:20.729691  239842 out.go:203]   - Generating certificates and keys ...
	I1231 10:34:20.132812  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:22.133593  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:21.380175  239842 out.go:203]   - Booting up control plane ...
	I1231 10:34:22.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.613409  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.633597  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.134575  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.111697  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.610920  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.633090  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:31.633767  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.434058  239842 out.go:203]   - Configuring RBAC rules ...
	I1231 10:34:34.849120  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:34.849163  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:31.611280  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.111498  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.133525  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:36.632536  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.852842  239842 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:34:34.852967  239842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:34:34.857479  239842 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl ...
	I1231 10:34:34.857506  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:34:34.874366  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:34:35.630846  239842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:34:35.630937  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.630961  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=newest-cni-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_34_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.651672  239842 ops.go:34] apiserver oom_adj: -16
	I1231 10:34:35.719909  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.316492  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.815787  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.316079  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.816415  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:38.316091  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.611394  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.632694  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:40.634440  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.133415  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:38.815995  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.316530  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.816139  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.315766  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.815697  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.316307  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.316374  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.816315  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:43.316484  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.611615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:42.612982  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.111363  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.633500  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:48.131867  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.816729  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.316037  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.815895  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.317082  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.315798  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.815886  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.316269  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.815962  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:48.003966  239842 kubeadm.go:864] duration metric: took 12.373088498s to wait for elevateKubeSystemPrivileges.
	I1231 10:34:48.003999  239842 kubeadm.go:390] StartCluster complete in 31.510097342s
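[Editor's note] The half-second cadence of the "kubectl get sa default" calls above is minikube polling until kube-controller-manager has created the default ServiceAccount, which appears to be what the 12.373s elevateKubeSystemPrivileges metric measured. The equivalent standalone check:

    # Exits 0 only once the "default" ServiceAccount exists in the default
    # namespace; minikube retries this until it does.
    kubectl get serviceaccount default -n default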
	I1231 10:34:48.004022  239842 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.004121  239842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:48.005914  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.526056  239842 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20211231103230-6736" rescaled to 1
	I1231 10:34:48.526130  239842 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}
	I1231 10:34:48.526152  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:34:48.528894  239842 out.go:176] * Verifying Kubernetes components...
	I1231 10:34:48.528963  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:48.526213  239842 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1231 10:34:48.529044  239842 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529069  239842 addons.go:65] Setting dashboard=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529085  239842 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529105  239842 addons.go:65] Setting metrics-server=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529112  239842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20211231103230-6736"
	I1231 10:34:48.529125  239842 addons.go:153] Setting addon metrics-server=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529134  239842 addons.go:165] addon metrics-server should already be in state true
	I1231 10:34:48.529072  239842 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20211231103230-6736"
	I1231 10:34:48.529172  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	W1231 10:34:48.529178  239842 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:34:48.529204  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529086  239842 addons.go:153] Setting addon dashboard=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529469  239842 addons.go:165] addon dashboard should already be in state true
	I1231 10:34:48.529489  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529507  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529669  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529673  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.526424  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:48.530032  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.586829  239842 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:34:48.590085  239842 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.589138  239842 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.590194  239842 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:34:48.590280  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.590345  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:34:48.590360  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:34:48.590419  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.593339  239842 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.593453  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:34:48.593465  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:34:48.593553  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.591124  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.600479  239842 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:34:48.601216  239842 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.601358  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:34:48.601494  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.659910  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.665029  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
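[Editor's note] The pipeline above rewrites the CoreDNS ConfigMap in place: sed inserts a hosts plugin block ahead of the existing forward stanza, and the edited document is piped straight back through kubectl replace. Reconstructed from the sed expression, the patched Corefile region reads:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The hosts plugin answers queries for host.minikube.internal with the gateway address and falls through to the normal forwarder for everything else.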
	I1231 10:34:48.665195  239842 api_server.go:51] waiting for apiserver process to appear ...
	I1231 10:34:48.665258  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:48.667125  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.679789  239842 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:48.679853  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:34:48.679963  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.686805  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.739096  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:47.111713  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:49.112487  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:48.902948  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:34:48.902984  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:34:48.903155  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.903256  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:34:48.903281  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:34:49.003786  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:34:49.003825  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:34:49.006836  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:34:49.006870  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:34:49.007822  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:49.095461  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.095567  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:34:49.102715  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:34:49.102747  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:34:49.206111  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:34:49.206155  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:34:49.207830  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.384988  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:34:49.385020  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:34:49.392682  239842 api_server.go:71] duration metric: took 866.519409ms to wait for apiserver process to appear ...
	I1231 10:34:49.392800  239842 api_server.go:87] waiting for apiserver healthz status ...
	I1231 10:34:49.392828  239842 start.go:773] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1231 10:34:49.392831  239842 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1231 10:34:49.403071  239842 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
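[Editor's note] The healthz probe above is a plain HTTPS GET against the apiserver, treated as healthy on a 200 response with body "ok". It can be reproduced by hand (-k skips certificate verification, tolerable only for a local probe like this):

    # Expect "ok" once the control plane is serving; anything else, or a
    # connection error, means the apiserver is not up yet.
    curl -k https://192.168.76.2:8443/healthz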
	I1231 10:34:49.404217  239842 api_server.go:140] control plane version: v1.23.2-rc.0
	I1231 10:34:49.404287  239842 api_server.go:130] duration metric: took 11.464686ms to wait for apiserver health ...
	I1231 10:34:49.404308  239842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1231 10:34:49.488542  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:34:49.488611  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:34:49.497619  239842 system_pods.go:59] 7 kube-system pods found
	I1231 10:34:49.497777  239842 system_pods.go:61] "coredns-64897985d-fh6sl" [f7a107d1-df7c-4b28-8f0d-eb5e6da38e4f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I1231 10:34:49.497853  239842 system_pods.go:61] "etcd-newest-cni-20211231103230-6736" [9212b001-2098-4378-b03f-05510269335f] Running
	I1231 10:34:49.497879  239842 system_pods.go:61] "kindnet-tkvfw" [4abcfbc0-b4e3-41f0-89d4-26c9a356f41e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1231 10:34:49.497904  239842 system_pods.go:61] "kube-apiserver-newest-cni-20211231103230-6736" [a37117a3-3753-4e5b-b2f3-5d129612ee51] Running
	I1231 10:34:49.497934  239842 system_pods.go:61] "kube-controller-manager-newest-cni-20211231103230-6736" [bae63363-f3ff-4de8-9600-07baeb9a1915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1231 10:34:49.497959  239842 system_pods.go:61] "kube-proxy-228gt" [8d98d417-a803-474a-b07d-aa7c25391bd9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1231 10:34:49.497987  239842 system_pods.go:61] "kube-scheduler-newest-cni-20211231103230-6736" [1a0e5162-834e-4b7f-815a-66f6b1511153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1231 10:34:49.498011  239842 system_pods.go:74] duration metric: took 93.696354ms to wait for pod list to return data ...
	I1231 10:34:49.498038  239842 default_sa.go:34] waiting for default service account to be created ...
	I1231 10:34:49.501869  239842 default_sa.go:45] found service account: "default"
	I1231 10:34:49.501947  239842 default_sa.go:55] duration metric: took 3.889685ms for default service account to be created ...
	I1231 10:34:49.501972  239842 kubeadm.go:542] duration metric: took 975.813843ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1231 10:34:49.502004  239842 node_conditions.go:102] verifying NodePressure condition ...
	I1231 10:34:49.584377  239842 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I1231 10:34:49.584456  239842 node_conditions.go:123] node cpu capacity is 8
	I1231 10:34:49.584501  239842 node_conditions.go:105] duration metric: took 82.480868ms to run NodePressure ...
	I1231 10:34:49.584516  239842 start.go:211] waiting for startup goroutines ...
	I1231 10:34:49.590387  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:34:49.590472  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:34:49.687079  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:34:49.687111  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:34:49.786712  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:49.786813  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:34:49.897022  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:50.206491  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.303301914s)
	I1231 10:34:50.206645  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.198797907s)
	I1231 10:34:50.596403  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388529563s)
	I1231 10:34:50.596444  239842 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20211231103230-6736"
	I1231 10:34:51.611369  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.714292146s)
	I1231 10:34:51.613756  239842 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:34:51.613803  239842 addons.go:417] enableAddons completed in 3.087611947s
	I1231 10:34:51.651587  239842 start.go:493] kubectl: 1.23.1, cluster: 1.23.2-rc.0 (minor skew: 0)
	I1231 10:34:51.654312  239842 out.go:176] * Done! kubectl is now configured to use "newest-cni-20211231103230-6736" cluster and "default" namespace by default
	I1231 10:34:50.132740  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:52.134471  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:53.136634  219726 node_ready.go:38] duration metric: took 4m0.01292471s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:34:53.140709  219726 out.go:176] 
	W1231 10:34:53.140941  219726 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:34:53.140961  219726 out.go:241] * 
	W1231 10:34:53.141727  219726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:34:51.611136  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:53.611579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:56.111245  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:58.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:00.610615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:02.611131  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:04.611362  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:06.611510  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:09.110821  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:11.610753  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:13.611463  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:16.111356  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:18.611389  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:21.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:23.610710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:25.611361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:28.111622  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:30.610549  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:32.610684  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:34.612116  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:37.111767  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:39.611741  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:42.111589  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:44.611029  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:47.111644  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:49.611065  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:52.111090  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:54.111164  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:56.611524  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:59.112071  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:01.610398  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:03.611163  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:06.111054  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:08.610990  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:10.611355  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:13.111940  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:15.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:17.611051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:20.111559  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:22.611443  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:25.110897  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:27.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:29.111842  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:31.611346  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:34.111760  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:36.611457  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:38.611614  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:40.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:43.111250  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:45.111518  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:47.611374  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:50.111716  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:52.611710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:55.111552  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:57.611111  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:59.611292  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:01.611646  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:04.112863  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:06.611407  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:08.611765  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:11.111509  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:13.112579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:15.611537  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:17.611860  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:20.111361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:22.611306  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:24.611757  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:26.611944  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:29.112062  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:31.611278  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:34.110991  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.611232  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.613526  232840 node_ready.go:38] duration metric: took 4m0.012005111s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:37:36.616503  232840 out.go:176] 
	W1231 10:37:36.616735  232840 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:37:36.616764  232840 out.go:241] * 
	W1231 10:37:36.617727  232840 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
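[Editor's note] Both GUEST_START failures above are the same symptom: the node object never reported the Ready condition within the 4m node wait, so the 6m start budget expired. A hypothetical first diagnostic step against one of these profiles would be to read the condition straight from the node (context name taken from the log; flags are standard kubectl):

    # Show the node's conditions; for a NotReady node the reason typically
    # points at an uninitialized or crash-looping CNI.
    kubectl --context default-k8s-different-port-20211231103230-6736 get nodes
    kubectl --context default-k8s-different-port-20211231103230-6736 describe nodes | grep -A8 'Conditions:'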
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9711ffb10b897       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   04b659f964be5
	91f9570ac5962       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   6f185cd7b6c56
	090a101afa0e5       b2756210eeabf       12 minutes ago      Running             etcd                      0                   295e37d445215
	a0fea282c2cab       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   410795c7cb2b8
	fddc6f96e1ab6       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   e70ba3548c048
	c5161903fa798       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   3badc9a2068b0
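[Editor's note] In the status table above, every control-plane container has been Running for 12 minutes while kindnet-cni sits in Exited on attempt 3, the signature of a crash-looping CNI, which is typically what keeps the kubelet from reporting Ready. The container IDs are usable directly with crictl, which accepts unambiguous ID prefixes:

    # Pull the logs and exit details of the crash-looping CNI container
    # listed above (ID copied verbatim from the table).
    sudo crictl logs 9711ffb10b897
    sudo crictl inspect 9711ffb10b897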
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:39:05 UTC. --
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.121007989Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.693445401Z" level=info msg="RemoveContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\""
	Dec 31 10:32:23 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:23.699838948Z" level=info msg="RemoveContainer for \"f3c41a92f3dcb0d899b68d0a3105a9d8038905ebe927dc4ccaa705e9c46d75e7\" returns successfully"
	Dec 31 10:32:36 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:36.028187479Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.194577290Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.195369432Z" level=info msg="StartContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:32:50 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:32:50.406751724Z" level=info msg="StartContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\" returns successfully"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.697424396Z" level=info msg="Finish piping stderr of container \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.697545851Z" level=info msg="Finish piping stdout of container \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.698495474Z" level=info msg="TaskExit event &TaskExit{ContainerID:8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71,ID:8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71,Pid:3149,ExitStatus:2,ExitedAt:2021-12-31 10:35:30.698074741 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.727245760Z" level=info msg="shim disconnected" id=8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.727370050Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:35:30 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:30.997100704Z" level=info msg="RemoveContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\""
	Dec 31 10:35:31 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:31.004566807Z" level=info msg="RemoveContainer for \"56e208454f91938e954714cf0c234ee41095d874aaba052186b5db7c08821fa3\" returns successfully"
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.025726225Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.048117919Z" level=info msg="CreateContainer within sandbox \"04b659f964be5884b7c46b53d59c3b0a04d83f2ed13fcf0dbb6db8356fd79926\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.048735700Z" level=info msg="StartContainer for \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:35:57 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:35:57.203719628Z" level=info msg="StartContainer for \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\" returns successfully"
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.502138131Z" level=info msg="Finish piping stderr of container \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.502229882Z" level=info msg="Finish piping stdout of container \"9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb\""
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.503021538Z" level=info msg="TaskExit event &TaskExit{ContainerID:9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb,ID:9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb,Pid:3601,ExitStatus:2,ExitedAt:2021-12-31 10:38:37.50263657 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.533047670Z" level=info msg="shim disconnected" id=9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:37.533166582Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:38.279424060Z" level=info msg="RemoveContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\""
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 containerd[470]: time="2021-12-31T10:38:38.286244909Z" level=info msg="RemoveContainer for \"8649e88571b04b9017abf629d51e5fc35bbc02554483100f52efe255275e0b71\" returns successfully"
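	
	The TaskExit events above (ExitStatus:2, attempts 2 and 3) record the kindnet-cni container crash-looping. A minimal sketch for pulling its output from inside the node; the truncated container ID is taken from the container status table above, and node access via `minikube ssh` plus sudo is an assumption:
	
	    minikube ssh -p old-k8s-version-20211231102602-6736
	    sudo crictl ps -a --name kindnet-cni   # every attempt, including exited ones
	    sudo crictl logs 9711ffb10b897         # stderr/stdout of the last failed attempt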
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_26_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:26:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:38:38 +0000   Fri, 31 Dec 2021 10:26:33 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-gjbqc                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-hdtr6                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  Starting                 12m                kubelet, old-k8s-version-20211231102602-6736     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet, old-k8s-version-20211231102602-6736     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
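	
	The Ready=False condition above ("cni plugin not initialized") means no CNI config was ever written. The kindnet DaemonSet quoted in the kube-controller-manager log below backs its config volume with the hostPath /etc/cni/net.mk, so an empty directory there would confirm the diagnosis; a minimal check, assuming SSH access to the node:
	
	    minikube ssh -p old-k8s-version-20211231102602-6736 -- sudo ls -la /etc/cni/net.mk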
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6] <==
	* 2021-12-31 10:26:33.403290 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:26:33.404899 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:26:33.405089 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:26:33.405340 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:26:34.390892 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:26:34.390937 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:26:34.390955 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:26:34.390970 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:26:34.390978 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:26:34.391151 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:26:34.392751 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:26:34.392798 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:26:34.392812 I | embed: ready to serve client requests
	2021-12-31 10:26:34.392846 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:26:34.392895 I | embed: ready to serve client requests
	2021-12-31 10:26:34.395926 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:26:34.396034 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:29:41.263024 W | etcdserver: request "header:<ID:8128010034796901496 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.2\" mod_revision:446 > success:<request_put:<key:\"/registry/masterleases/192.168.49.2\" value_size:67 lease:8128010034796901494 >> failure:<request_range:<key:\"/registry/masterleases/192.168.49.2\" > >>" with result "size:16" took too long (169.430776ms) to execute
	2021-12-31 10:29:41.263180 W | etcdserver: read-only range request "key:\"/registry/jobs\" range_end:\"/registry/jobt\" count_only:true " with result "range_response_count:0 size:5" took too long (170.047688ms) to execute
	2021-12-31 10:29:41.361952 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (245.828121ms) to execute
	2021-12-31 10:29:41.569861 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:215" took too long (305.209221ms) to execute
	2021-12-31 10:29:56.808390 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20211231102602-6736\" " with result "range_response_count:1 size:3551" took too long (191.596053ms) to execute
	2021-12-31 10:32:38.401363 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:799" took too long (109.529951ms) to execute
	2021-12-31 10:36:34.414815 I | mvcc: store.index: compact 484
	2021-12-31 10:36:34.415894 I | mvcc: finished scheduled compaction at 484 (took 593.306µs)
	
	* 
	* ==> kernel <==
	*  10:39:05 up  1:21,  0 users,  load average: 0.93, 1.36, 2.16
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe] <==
	* I1231 10:26:37.667487       1 customresource_discovery_controller.go:208] Starting DiscoveryController
	I1231 10:26:37.667494       1 naming_controller.go:288] Starting NamingConditionController
	I1231 10:26:37.667500       1 establishing_controller.go:73] Starting EstablishingController
	E1231 10:26:37.691341       1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1231 10:26:37.778842       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:26:37.778919       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I1231 10:26:37.779139       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:26:37.779195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:26:38.665987       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I1231 10:26:38.666024       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:26:38.666037       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:26:38.670056       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1231 10:26:38.673207       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:26:38.673236       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1231 10:26:40.447719       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:26:40.727470       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1231 10:26:41.010135       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1231 10:26:41.011069       1 controller.go:606] quota admission added evaluator for: endpoints
	I1231 10:26:41.096654       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:26:41.899110       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1231 10:26:42.121597       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1231 10:26:42.438856       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1231 10:26:57.699475       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1231 10:26:57.711823       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1231 10:26:57.751097       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0] <==
	* I1231 10:26:57.615461       1 shared_informer.go:204] Caches are synced for PV protection 
	I1231 10:26:57.656384       1 shared_informer.go:204] Caches are synced for persistent volume 
	I1231 10:26:57.682073       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I1231 10:26:57.697117       1 shared_informer.go:204] Caches are synced for deployment 
	I1231 10:26:57.700183       1 shared_informer.go:204] Caches are synced for attach detach 
	I1231 10:26:57.702252       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
	I1231 10:26:57.703189       1 shared_informer.go:204] Caches are synced for expand 
	I1231 10:26:57.709182       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:57.717504       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-cqjc7
	I1231 10:26:57.738063       1 shared_informer.go:204] Caches are synced for disruption 
	I1231 10:26:57.738105       1 disruption.go:341] Sending events to api server.
	I1231 10:26:57.747623       1 shared_informer.go:204] Caches are synced for daemon sets 
	I1231 10:26:57.787815       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-gjbqc
	I1231 10:26:57.787853       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", APIVersion:"apps/v1", ResourceVersion:"215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-hdtr6
	I1231 10:26:57.808977       1 shared_informer.go:204] Caches are synced for stateful set 
	I1231 10:26:57.809469       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.809546       1 shared_informer.go:204] Caches are synced for resource quota 
	E1231 10:26:57.815277       1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"bdccb7fa-0064-4ee0-9ebc-fa377e485696", ResourceVersion:"232", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerati
ons\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000627120), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:
[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.Vsphere
VirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000627180), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolume
Source)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0006271e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)
(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.
Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627220)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000627640)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resou
rce.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac17c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.Eph
emeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136cc98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc5a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.Resou
rceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e020)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cce0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1231 10:26:57.816055       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"4d7e5ca0-b554-4379-9229-5965d7d0d5ba", ResourceVersion:"215", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63776543202, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000626d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001a48740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626d80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000626da0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000626ea0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000ac1680), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00136ca78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012cc420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000a3e018)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00136cab8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1231 10:26:57.878955       1 shared_informer.go:204] Caches are synced for garbage collector 
	I1231 10:26:57.879123       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:26:57.889624       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c8554f51-52fd-4f6a-8e2b-35d79db7d7fa", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-5644d7b6d9 to 1
	I1231 10:26:57.907023       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"8967477e-14f1-4c1f-8c2b-7a0fe862d229", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-5644d7b6d9-g95dr
	I1231 10:26:58.905952       1 shared_informer.go:197] Waiting for caches to sync for resource quota
	I1231 10:26:59.006259       1 shared_informer.go:204] Caches are synced for resource quota 
	
	* 
	* ==> kube-proxy [91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512] <==
	* W1231 10:26:58.698313       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:26:58.707406       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:26:58.707466       1 server_others.go:149] Using iptables Proxier.
	I1231 10:26:58.708676       1 server.go:529] Version: v1.16.0
	I1231 10:26:58.709278       1 config.go:131] Starting endpoints config controller
	I1231 10:26:58.709318       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:26:58.709660       1 config.go:313] Starting service config controller
	I1231 10:26:58.709692       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:26:58.809529       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1231 10:26:58.809853       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4] <==
	* E1231 10:26:37.805362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:37.806980       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:37.807074       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:37.807236       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:37.882619       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:37.882673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:37.882844       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:37.882953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.883629       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:37.884708       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:37.885065       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:38.807000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:26:38.808398       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:26:38.809398       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:26:38.810390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:26:38.884131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:26:38.885117       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:26:38.886254       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:26:38.887753       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.888713       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:26:38.889525       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:26:38.890428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:26:57.722315       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:57.792300       1 factory.go:585] pod is already present in the activeQ
	E1231 10:26:59.497262       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:26:14 UTC, end at Fri 2021-12-31 10:39:06 UTC. --
	Dec 31 10:38:12 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:12.335994     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:14 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:14.722211     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:14 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:14.722260     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:17 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:17.337018     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:22 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:22.337975     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:24 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:24.755273     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:24 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:24.755316     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:27 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:27.338809     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:32 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:32.339728     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:34 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:34.789276     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:34 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:34.789344     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:37 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:37.340687     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:38 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:38.279231     863 pod_workers.go:191] Error syncing pod a71bd990-5819-4720-aba3-d5cdc1c779dd ("kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"
	Dec 31 10:38:42 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:42.341767     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:44 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:44.819275     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:44 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:44.819311     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:47 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:47.342684     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:52 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:52.343558     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:38:53 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:53.023800     863 pod_workers.go:191] Error syncing pod a71bd990-5819-4720-aba3-d5cdc1c779dd ("kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-gjbqc_kube-system(a71bd990-5819-4720-aba3-d5cdc1c779dd)"
	Dec 31 10:38:54 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:54.849944     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:38:54 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:54.849994     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:38:57 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:38:57.344624     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:39:02 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:39:02.345575     863 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:39:04 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:39:04.893269     863 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:39:04 old-k8s-version-20211231102602-6736 kubelet[863]: E1231 10:39:04.893309     863 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics

-- /stdout --
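The repeated kubelet errors above trace the failure chain: the CNI plugin never initializes, so the node stays NotReady, and the kindnet pod that would install the CNI config is itself stuck in CrashLoopBackOff (back-off 40s). A hedged sketch of commands one might run to confirm this, assuming kindnet's usual app=kindnet label and the standard CNI config directory:

	# Was any CNI config ever written inside the node?
	minikube ssh -p old-k8s-version-20211231102602-6736 -- ls /etc/cni/net.d
	# Why is the kindnet container crashing?
	kubectl --context old-k8s-version-20211231102602-6736 -n kube-system logs -l app=kindnet --tail=50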
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1 (73.334603ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9ddtj (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-9ddtj:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-9ddtj
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m5s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m29s (x1 over 6m59s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

-- /stdout --
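The describe output shows why busybox stayed Pending for the whole window: the single node carries a taint the pod does not tolerate, consistent with the node.kubernetes.io/not-ready taint that remains while the CNI is down. A quick way to confirm which taint is blocking scheduling (a sketch using standard kubectl jsonpath):

	kubectl --context old-k8s-version-20211231102602-6736 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'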
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-cqjc7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod busybox coredns-5644d7b6d9-cqjc7 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (485.76s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (308.8s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1: exit status 80 (5m6.278209203s)

-- stdout --
	* [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass

-- /stdout --
** stderr ** 
	I1231 10:32:30.430121  232840 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:32:30.430252  232840 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:32:30.430264  232840 out.go:310] Setting ErrFile to fd 2...
	I1231 10:32:30.430270  232840 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:32:30.430393  232840 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:32:30.430737  232840 out.go:304] Setting JSON to false
	I1231 10:32:30.432222  232840 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4505,"bootTime":1640942245,"procs":424,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:32:30.432345  232840 start.go:122] virtualization: kvm guest
	I1231 10:32:30.436338  232840 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:32:30.436565  232840 notify.go:174] Checking for updates...
	I1231 10:32:30.439206  232840 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:32:30.441692  232840 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:32:30.444186  232840 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:32:30.447175  232840 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:32:30.449923  232840 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:32:30.450675  232840 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:32:30.450846  232840 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:32:30.450941  232840 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:32:30.503792  232840 docker.go:132] docker version: linux-20.10.12
	I1231 10:32:30.503898  232840 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:32:30.635856  232840 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:32:30.549118787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:32:30.635998  232840 docker.go:237] overlay module found
	I1231 10:32:30.640396  232840 out.go:176] * Using the docker driver based on user configuration
	I1231 10:32:30.640443  232840 start.go:280] selected driver: docker
	I1231 10:32:30.640449  232840 start.go:795] validating driver "docker" against <nil>
	I1231 10:32:30.640472  232840 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:32:30.640495  232840 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:32:30.640501  232840 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:32:30.640543  232840 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:32:30.640571  232840 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:32:30.643754  232840 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:32:30.645006  232840 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:32:30.771924  232840 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:32:30.685299994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:32:30.772087  232840 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 10:32:30.772366  232840 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:32:30.772399  232840 cni.go:93] Creating CNI manager for ""
	I1231 10:32:30.772409  232840 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:32:30.772416  232840 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:32:30.772429  232840 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:32:30.772435  232840 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 10:32:30.772452  232840 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:32:30.775804  232840 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:32:30.775887  232840 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:32:30.779895  232840 out.go:176] * Pulling base image ...
	I1231 10:32:30.779987  232840 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:32:30.780051  232840 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:32:30.780103  232840 cache.go:57] Caching tarball of preloaded images
	I1231 10:32:30.780113  232840 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:32:30.780617  232840 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:32:30.780669  232840 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:32:30.780916  232840 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:32:30.780964  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json: {Name:mkb097073a2ee32dc476e3b27696991be2f8e170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:32:30.829749  232840 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:32:30.829777  232840 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:32:30.829787  232840 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:32:30.829819  232840 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:32:30.829968  232840 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 132.561µs
	I1231 10:32:30.829994  232840 start.go:89] Provisioning new machine with config: &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:32:30.830077  232840 start.go:126] createHost starting for "" (driver="docker")
	I1231 10:32:30.834750  232840 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1231 10:32:30.835086  232840 start.go:160] libmachine.API.Create for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:32:30.835132  232840 client.go:168] LocalClient.Create starting
	I1231 10:32:30.835196  232840 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem
	I1231 10:32:30.835238  232840 main.go:130] libmachine: Decoding PEM data...
	I1231 10:32:30.835256  232840 main.go:130] libmachine: Parsing certificate...
	I1231 10:32:30.835351  232840 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem
	I1231 10:32:30.835382  232840 main.go:130] libmachine: Decoding PEM data...
	I1231 10:32:30.835405  232840 main.go:130] libmachine: Parsing certificate...
	I1231 10:32:30.835857  232840 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1231 10:32:30.880904  232840 cli_runner.go:180] docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1231 10:32:30.881023  232840 network_create.go:254] running [docker network inspect default-k8s-different-port-20211231103230-6736] to gather additional debugging logs...
	I1231 10:32:30.881053  232840 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736
	W1231 10:32:30.921474  232840 cli_runner.go:180] docker network inspect default-k8s-different-port-20211231103230-6736 returned with exit code 1
	I1231 10:32:30.921517  232840 network_create.go:257] error running [docker network inspect default-k8s-different-port-20211231103230-6736]: docker network inspect default-k8s-different-port-20211231103230-6736: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20211231103230-6736
	I1231 10:32:30.921528  232840 network_create.go:259] output of [docker network inspect default-k8s-different-port-20211231103230-6736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20211231103230-6736
	
	** /stderr **
	I1231 10:32:30.921576  232840 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:32:30.962253  232840 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-689da033f191 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:2a:98:ff}}
	I1231 10:32:30.963117  232840 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-821d0d66bcf3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ba:e9:21:49}}
	I1231 10:32:30.964321  232840 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0006fe148] misses:0}
	I1231 10:32:30.964370  232840 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1231 10:32:30.964388  232840 network_create.go:106] attempt to create docker network default-k8s-different-port-20211231103230-6736 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1231 10:32:30.964446  232840 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20211231103230-6736
	I1231 10:32:31.058515  232840 network_create.go:90] docker network default-k8s-different-port-20211231103230-6736 192.168.67.0/24 created
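As the preceding lines show, minikube walks candidate /24 subnets, skips 192.168.49.0/24 and 192.168.58.0/24 because existing bridges already use them, and reserves 192.168.67.0/24 for this cluster's network. The created network can be checked with the plain docker CLI (a sketch; the network name is taken from the log):

	docker network inspect default-k8s-different-port-20211231103230-6736 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'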
	I1231 10:32:31.058573  232840 kic.go:106] calculated static IP "192.168.67.2" for the "default-k8s-different-port-20211231103230-6736" container
	I1231 10:32:31.058641  232840 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I1231 10:32:31.105806  232840 cli_runner.go:133] Run: docker volume create default-k8s-different-port-20211231103230-6736 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211231103230-6736 --label created_by.minikube.sigs.k8s.io=true
	I1231 10:32:31.154527  232840 oci.go:102] Successfully created a docker volume default-k8s-different-port-20211231103230-6736
	I1231 10:32:31.154676  232840 cli_runner.go:133] Run: docker run --rm --name default-k8s-different-port-20211231103230-6736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211231103230-6736 --entrypoint /usr/bin/test -v default-k8s-different-port-20211231103230-6736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I1231 10:32:31.978362  232840 oci.go:106] Successfully prepared a docker volume default-k8s-different-port-20211231103230-6736
	I1231 10:32:31.978430  232840 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:32:31.978456  232840 kic.go:179] Starting extracting preloaded images to volume ...
	I1231 10:32:31.978544  232840 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211231103230-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I1231 10:32:50.209460  232840 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20211231103230-6736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (18.230852162s)
	I1231 10:32:50.209509  232840 kic.go:188] duration metric: took 18.231051 seconds to extract preloaded images to volume
	W1231 10:32:50.209557  232840 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1231 10:32:50.209564  232840 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I1231 10:32:50.209607  232840 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1231 10:32:50.327537  232840 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20211231103230-6736 --name default-k8s-different-port-20211231103230-6736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20211231103230-6736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20211231103230-6736 --network default-k8s-different-port-20211231103230-6736 --ip 192.168.67.2 --volume default-k8s-different-port-20211231103230-6736:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
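The node is a single privileged container: /var lives on the preloaded volume, the static IP 192.168.67.2 attaches to the new network, and the apiserver port 8444 (plus SSH and the other service ports) is published to ephemeral 127.0.0.1 ports. A hedged one-liner to recover the host-side mapping afterwards:

	# Which loopback port did Docker assign to the published apiserver port?
	docker port default-k8s-different-port-20211231103230-6736 8444/tcp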
	I1231 10:32:50.945910  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Running}}
	I1231 10:32:51.003058  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:32:51.058435  232840 cli_runner.go:133] Run: docker exec default-k8s-different-port-20211231103230-6736 stat /var/lib/dpkg/alternatives/iptables
	I1231 10:32:51.151337  232840 oci.go:175] the created container "default-k8s-different-port-20211231103230-6736" has a running status.
	I1231 10:32:51.151388  232840 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa...
	I1231 10:32:51.741077  232840 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1231 10:32:51.853946  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:32:51.907118  232840 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1231 10:32:51.907146  232840 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20211231103230-6736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1231 10:32:52.016186  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:32:52.057071  232840 machine.go:88] provisioning docker machine ...
	I1231 10:32:52.057122  232840 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:32:52.057214  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:52.097053  232840 main.go:130] libmachine: Using SSH client type: native
	I1231 10:32:52.097286  232840 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I1231 10:32:52.097306  232840 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:32:52.254154  232840 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:32:52.254253  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:52.292024  232840 main.go:130] libmachine: Using SSH client type: native
	I1231 10:32:52.292209  232840 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I1231 10:32:52.292292  232840 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:32:52.435250  232840 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:32:52.435301  232840 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:32:52.435349  232840 ubuntu.go:177] setting up certificates
	I1231 10:32:52.435364  232840 provision.go:83] configureAuth start
	I1231 10:32:52.435424  232840 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:32:52.477873  232840 provision.go:138] copyHostCerts
	I1231 10:32:52.477941  232840 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:32:52.477949  232840 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:32:52.478001  232840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:32:52.478081  232840 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:32:52.478100  232840 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:32:52.478117  232840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:32:52.478168  232840 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:32:52.478185  232840 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:32:52.478200  232840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:32:52.478237  232840 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
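The server certificate is minted with SANs covering the container's static IP, loopback, and the cluster hostnames, so both in-cluster and tunneled 127.0.0.1 connections validate. The SANs can be re-read from the generated file (a sketch; requires OpenSSL 1.1.1+ for -ext, path taken from the log):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem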
	I1231 10:32:52.735523  232840 provision.go:172] copyRemoteCerts
	I1231 10:32:52.735588  232840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:32:52.735619  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:52.775338  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:32:52.873796  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:32:52.895967  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:32:52.916692  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:32:52.936482  232840 provision.go:86] duration metric: configureAuth took 501.098985ms
	I1231 10:32:52.936517  232840 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:32:52.936711  232840 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:32:52.936726  232840 machine.go:91] provisioned docker machine in 879.620503ms
	I1231 10:32:52.936733  232840 client.go:171] LocalClient.Create took 22.101595368s
	I1231 10:32:52.936769  232840 start.go:168] duration metric: libmachine.API.Create for "default-k8s-different-port-20211231103230-6736" took 22.101684674s
	I1231 10:32:52.936782  232840 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:32:52.936789  232840 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:32:52.936850  232840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:32:52.936903  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:52.973999  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:32:53.077011  232840 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:32:53.080221  232840 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:32:53.080287  232840 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:32:53.080300  232840 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:32:53.080305  232840 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:32:53.080316  232840 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:32:53.080370  232840 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:32:53.080430  232840 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:32:53.080505  232840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:32:53.088739  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:32:53.111171  232840 start.go:270] post-start completed in 174.373861ms
	I1231 10:32:53.111582  232840 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:32:53.150988  232840 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:32:53.151235  232840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:32:53.151288  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:53.193036  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:32:53.293866  232840 start.go:129] duration metric: createHost completed in 22.463776715s
	I1231 10:32:53.293911  232840 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 22.463929835s
	I1231 10:32:53.293995  232840 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:32:53.343755  232840 ssh_runner.go:195] Run: systemctl --version
	I1231 10:32:53.343823  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:53.343862  232840 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:32:53.344008  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:32:53.392089  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:32:53.393553  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:32:53.507969  232840 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:32:53.522306  232840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:32:53.534229  232840 docker.go:158] disabling docker service ...
	I1231 10:32:53.534288  232840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:32:53.554945  232840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:32:53.567050  232840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:32:53.667352  232840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:32:53.760479  232840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:32:53.771379  232840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:32:53.787017  232840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
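The long argument above is a base64-encoded /etc/containerd/config.toml that minikube pipes through base64 -d and tee onto the node; decoded, it shows among other things conf_dir = "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra-config set earlier. A sketch for reading it locally (substitute the payload from the log):

	echo '<base64 payload above>' | base64 -d | less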
	I1231 10:32:53.805071  232840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:32:53.814216  232840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:32:53.822588  232840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:32:53.903295  232840 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:32:53.976705  232840 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:32:53.976795  232840 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:32:53.981173  232840 start.go:458] Will wait 60s for crictl version
	I1231 10:32:53.981262  232840 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:32:54.011930  232840 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:32:54Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:33:05.060386  232840 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:33:05.091447  232840 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:33:05.091503  232840 ssh_runner.go:195] Run: containerd --version
	I1231 10:33:05.115507  232840 ssh_runner.go:195] Run: containerd --version
	I1231 10:33:05.144050  232840 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:33:05.144161  232840 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:33:05.184212  232840 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:33:05.188386  232840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
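This one-liner rewrites /etc/hosts inside the node so host.minikube.internal resolves to the network gateway 192.168.67.1, giving workloads a stable name for the host machine. A hedged verification from outside the node:

	minikube ssh -p default-k8s-different-port-20211231103230-6736 -- grep host.minikube.internal /etc/hosts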
	I1231 10:33:05.203553  232840 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:33:05.205811  232840 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:33:05.208340  232840 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:33:05.208431  232840 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:33:05.208519  232840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:33:05.241656  232840 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:33:05.241698  232840 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:33:05.241737  232840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:33:05.271009  232840 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:33:05.271034  232840 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:33:05.271072  232840 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:33:05.298546  232840 cni.go:93] Creating CNI manager for ""
	I1231 10:33:05.298578  232840 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:05.298591  232840 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:33:05.298604  232840 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:33:05.298730  232840 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:33:05.298815  232840 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
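The doubled ExecStart in the drop-in above is deliberate systemd usage: an empty ExecStart= line first clears the command inherited from the base kubelet.service, because systemd rejects a second ExecStart on a normal (non-oneshot) service. A sketch of rendering such a drop-in (renderDropIn is a hypothetical helper; the kubelet flags here are abbreviated):

package main

import "fmt"

// renderDropIn emits a kubelet systemd drop-in. The empty ExecStart= resets
// whatever the base unit defined before the real command line is set.
func renderDropIn(kubeletCmd string) string {
	return fmt.Sprintf("[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\nExecStart=%s\n\n[Install]\n", kubeletCmd)
}

func main() {
	fmt.Print(renderDropIn("/var/lib/minikube/binaries/v1.23.1/kubelet --config=/var/lib/kubelet/config.yaml"))
}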
	I1231 10:33:05.298869  232840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:33:05.307802  232840 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:33:05.307866  232840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:33:05.316102  232840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:33:05.331909  232840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:33:05.347914  232840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:33:05.363222  232840 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:33:05.366653  232840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:33:05.377539  232840 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:33:05.377674  232840 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:33:05.377718  232840 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:33:05.377766  232840 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:33:05.377783  232840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.crt with IP's: []
	I1231 10:33:05.517754  232840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.crt ...
	I1231 10:33:05.517792  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.crt: {Name:mk4588cd42a619dee000b322b35097bcc4167fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.517997  232840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key ...
	I1231 10:33:05.518013  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key: {Name:mkae0c08267d9e9a81d1a7ab3d2b110c4ec3722d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.518098  232840 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:33:05.518116  232840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1231 10:33:05.662867  232840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt.c7fa3a9e ...
	I1231 10:33:05.662905  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt.c7fa3a9e: {Name:mkb273a6bb34a1849a6e31dd7af6af852df5b317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.663141  232840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e ...
	I1231 10:33:05.663159  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e: {Name:mkd5b397433dd7a6edc224ba5e996e9c93878a51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.663263  232840 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt
	I1231 10:33:05.663338  232840 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key
	I1231 10:33:05.663406  232840 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:33:05.663428  232840 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt with IP's: []
	I1231 10:33:05.858128  232840 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt ...
	I1231 10:33:05.858163  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt: {Name:mkf2122b0b2c74bf02bae6b9b0e67f14bc06107b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.858343  232840 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key ...
	I1231 10:33:05.858357  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key: {Name:mk18f075caa0d7907e3380337636bcb12d902e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:05.858535  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:33:05.858574  232840 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:33:05.858594  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:33:05.858614  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:33:05.858640  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:33:05.858664  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:33:05.858700  232840 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:33:05.859759  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:33:05.880754  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:33:05.901893  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:33:05.927677  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:33:05.950146  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:33:05.971594  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:33:05.994587  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:33:06.015241  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:33:06.037600  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:33:06.058682  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:33:06.079711  232840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:33:06.100607  232840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:33:06.115760  232840 ssh_runner.go:195] Run: openssl version
	I1231 10:33:06.121458  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:33:06.131519  232840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:33:06.135298  232840 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:33:06.135349  232840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:33:06.141127  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:33:06.151076  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:33:06.160446  232840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:33:06.164389  232840 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:33:06.164518  232840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:33:06.170530  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:33:06.181082  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:33:06.192057  232840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:33:06.196541  232840 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:33:06.196600  232840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:33:06.202726  232840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
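The test/ln pairs above give each CA a <subject-hash>.0 symlink in /etc/ssl/certs, which is how OpenSSL's default certificate-directory lookup locates trusted CAs; the hash names (51391683.0, 3ec20f2e.0, b5213941.0) come from the openssl x509 -hash -noout runs just before them. A sketch of the same hash-and-symlink step (trustCert is hypothetical; the paths mirror this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes a certificate's OpenSSL subject hash and installs the
// <hash>.0 symlink that the default cert-directory lookup expects.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}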
	I1231 10:33:06.211827  232840 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:06.211936  232840 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:33:06.211999  232840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:33:06.247949  232840 cri.go:87] found id: ""
	I1231 10:33:06.248415  232840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:33:06.258913  232840 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:33:06.268360  232840 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:33:06.268424  232840 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:33:06.277504  232840 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:33:06.277568  232840 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:33:06.601233  232840 out.go:203]   - Generating certificates and keys ...
	I1231 10:33:09.557087  232840 out.go:203]   - Booting up control plane ...
	I1231 10:33:23.113776  232840 out.go:203]   - Configuring RBAC rules ...
	I1231 10:33:23.531364  232840 cni.go:93] Creating CNI manager for ""
	I1231 10:33:23.531477  232840 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:23.535855  232840 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:33:23.535949  232840 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:33:23.541497  232840 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:33:23.541535  232840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:33:23.563892  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:33:24.596943  232840 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.033006945s)
	I1231 10:33:24.597003  232840 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:33:24.597132  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:24.597135  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_33_24_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:24.616930  232840 ops.go:34] apiserver oom_adj: -16
	I1231 10:33:24.711087  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:25.274392  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:25.774556  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:26.275075  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:26.774529  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:27.275170  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:27.775035  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:28.274795  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:28.774511  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:29.275198  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:29.774983  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:30.275342  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:30.774404  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:31.274937  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:31.774613  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:32.274276  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:32.774776  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:33.274359  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:33.774573  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:34.275312  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:34.774423  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:35.276711  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:35.774637  232840 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:33:35.911373  232840 kubeadm.go:864] duration metric: took 11.314292362s to wait for elevateKubeSystemPrivileges.
	I1231 10:33:35.911415  232840 kubeadm.go:390] StartCluster complete in 29.699597935s
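The burst of kubectl get sa default calls above is the elevateKubeSystemPrivileges wait named in the duration metric: after kubeadm init returns, minikube polls until the service-account controller has created the default ServiceAccount (about 11.3s here) before it relies on the cluster-admin binding applied at 10:33:24. A sketch of that wait (the kubeconfig path and interval are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until the service-account
// controller has created it, or the timeout expires.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}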
	I1231 10:33:35.911437  232840 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:35.911558  232840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:33:35.914659  232840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:33:36.437372  232840 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:33:36.437450  232840 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:33:36.441265  232840 out.go:176] * Verifying Kubernetes components...
	I1231 10:33:36.441345  232840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:33:36.439568  232840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:33:36.439803  232840 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:33:36.439815  232840 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1231 10:33:36.441604  232840 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:33:36.441627  232840 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:33:36.441634  232840 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:33:36.441666  232840 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:33:36.441991  232840 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:33:36.442016  232840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:33:36.442249  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:36.442383  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:36.506934  232840 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:33:36.507125  232840 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:33:36.507138  232840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:33:36.507191  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:33:36.561004  232840 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:33:36.561095  232840 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:33:36.561142  232840 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:33:36.561716  232840 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:36.587224  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:33:36.601481  232840 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:33:36.601916  232840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
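The pipeline above edits CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" line, and feeds the result to kubectl replace, so cluster DNS answers for host.minikube.internal before falling through to normal upstream resolution. Reconstructed from the sed expression, the inserted Corefile stanza is the following (shown as a Go constant to keep a single example language):

package main

import "fmt"

// hostsStanza is the block the sed command splices into the Corefile ahead
// of the forward plugin; fallthrough lets unmatched names continue to it.
const hostsStanza = `        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }`

func main() {
	fmt.Println(hostsStanza)
}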
	I1231 10:33:36.652597  232840 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:33:36.652750  232840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:33:36.652882  232840 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:33:36.719650  232840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:33:36.823527  232840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:33:36.918254  232840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:33:37.221982  232840 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:33:37.624579  232840 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1231 10:33:37.624620  232840 addons.go:417] enableAddons completed in 1.184806252s
	I1231 10:33:38.616558  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	... (node_ready.go:58 logged the same "Ready":"False" status for node "default-k8s-different-port-20211231103230-6736" roughly every 2.5s from 10:33:41 through 10:37:34; the repeated lines are elided) ...
	I1231 10:37:36.611232  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.613526  232840 node_ready.go:38] duration metric: took 4m0.012005111s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:37:36.616503  232840 out.go:176] 
	W1231 10:37:36.616735  232840 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:37:36.616764  232840 out.go:241] * 
	W1231 10:37:36.617727  232840 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:37:36.620267  232840 out.go:176] 

                                                
                                                
** /stderr **
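The failure itself is the four-minute stretch of node_ready.go:58 polls above: the node never reported the NodeReady condition as True within the 6m budget, so start exits with GUEST_START (exit status 80). The check being repeated on each poll is essentially the following (a client-go sketch of the readiness test; the kubeconfig path is illustrative and the node name is taken from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the Node object and reports whether its NodeReady
// condition is True, which is what the wait loop checks on every poll.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "default-k8s-different-port-20211231103230-6736")
	fmt.Println(ready, err)
}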
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:32:50.932223463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be0a219411bd67bdb3a91065eefcb9498528f3367077de2d90f3a0ebd5f1a6ea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be0a219411bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "3f15aedb2298185e311300c15ed78486951e6e1f525e08afdb042e339fa53d16",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
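Rather than dumping the whole document, specific fields of the inspect output above can be pulled with docker's Go-template --format flag, exactly as the harness log lines further down do for another profile (a sketch; the field paths correspond to the JSON above):

    # container state only
    docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
    # host port published for the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-different-port-20211231103230-6736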
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.123346302s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:29:28 UTC | Fri, 31 Dec 2021 10:30:43 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                           |         |         |                               |                               |
	|         | --driver=docker                                            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:52 UTC | Fri, 31 Dec 2021 10:30:52 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736       | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:59 UTC | Fri, 31 Dec 2021 10:31:00 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| stop    | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:30:53 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:31:13 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                           |         |         |                               |                               |
	| start   | -p no-preload-20211231102928-6736                          | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:31:13 UTC | Fri, 31 Dec 2021 10:32:11 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                           |         |         |                               |                               |
	|         | --driver=docker                                            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:22 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                           |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736    | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                           |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                           |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                           |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                           |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                           |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                           |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                           |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                           |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                           |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                           |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                           |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                           |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                           |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                           |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                           |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736           | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                           |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                           |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                           |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
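The Audit table above is also embedded in any saved `minikube logs` dump, so it can be recovered from a capture without re-running anything (a sketch, assuming a logs.txt saved as in the troubleshooting box earlier; the section runs from its header to the first blank line):

    sed -n '/==> Audit <==/,/^$/p' logs.txt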
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:33:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:33:58.889763  239842 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:33:58.889968  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890015  239842 out.go:310] Setting ErrFile to fd 2...
	I1231 10:33:58.890028  239842 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:33:58.890301  239842 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:33:58.890755  239842 out.go:304] Setting JSON to false
	I1231 10:33:58.892928  239842 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4593,"bootTime":1640942245,"procs":596,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:33:58.893046  239842 start.go:122] virtualization: kvm guest
	I1231 10:33:58.896075  239842 out.go:176] * [newest-cni-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:33:58.898770  239842 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:33:58.896425  239842 notify.go:174] Checking for updates...
	I1231 10:33:58.901377  239842 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:33:58.904292  239842 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:33:58.906743  239842 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:33:58.909823  239842 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:33:58.911269  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:33:58.911745  239842 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:33:58.960055  239842 docker.go:132] docker version: linux-20.10.12
	I1231 10:33:58.960175  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.061340  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:58.994194285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:33:59.061470  239842 docker.go:237] overlay module found
	I1231 10:33:59.064676  239842 out.go:176] * Using the docker driver based on existing profile
	I1231 10:33:59.064715  239842 start.go:280] selected driver: docker
	I1231 10:33:59.064721  239842 start.go:795] validating driver "docker" against &{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.064864  239842 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:33:59.064877  239842 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:33:59.064882  239842 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:33:59.064913  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.064992  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.067375  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.068079  239842 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:33:59.179516  239842 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:49 SystemTime:2021-12-31 10:33:59.103117577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:33:59.179717  239842 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:33:59.179756  239842 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:33:59.182917  239842 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:33:59.183064  239842 start_flags.go:829] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1231 10:33:59.183104  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:33:59.183116  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:33:59.183124  239842 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183133  239842 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 10:33:59.183142  239842 start_flags.go:298] config:
	{Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:33:59.186972  239842 out.go:176] * Starting control plane node newest-cni-20211231103230-6736 in cluster newest-cni-20211231103230-6736
	I1231 10:33:59.187046  239842 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:33:59.189137  239842 out.go:176] * Pulling base image ...
	I1231 10:33:59.189233  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:33:59.189311  239842 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:33:59.189340  239842 cache.go:57] Caching tarball of preloaded images
	I1231 10:33:59.189397  239842 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:33:59.189738  239842 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:33:59.189762  239842 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2-rc.0 on containerd
	I1231 10:33:59.189945  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.232549  239842 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:33:59.232610  239842 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:33:59.232633  239842 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:33:59.232677  239842 start.go:313] acquiring machines lock for newest-cni-20211231103230-6736: {Name:mkea4a41968f23a7f754ed1625a06fab4a3434ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:33:59.232826  239842 start.go:317] acquired machines lock for "newest-cni-20211231103230-6736" in 116.689µs
	I1231 10:33:59.232869  239842 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:33:59.232883  239842 fix.go:55] fixHost starting: 
	I1231 10:33:59.233271  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.275589  239842 fix.go:108] recreateIfNeeded on newest-cni-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:33:59.275624  239842 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:33:57.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:00.110928  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:33:58.633918  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:00.634303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.134303  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:33:59.278797  239842 out.go:176] * Restarting existing docker container for "newest-cni-20211231103230-6736" ...
	I1231 10:33:59.278893  239842 cli_runner.go:133] Run: docker start newest-cni-20211231103230-6736
	I1231 10:33:59.808917  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:33:59.856009  239842 kic.go:420] container "newest-cni-20211231103230-6736" state is running.
	I1231 10:33:59.856565  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:33:59.904405  239842 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/config.json ...
	I1231 10:33:59.904680  239842 machine.go:88] provisioning docker machine ...
	I1231 10:33:59.904703  239842 ubuntu.go:169] provisioning hostname "newest-cni-20211231103230-6736"
	I1231 10:33:59.904740  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:33:59.947914  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:33:59.948105  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:33:59.948124  239842 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20211231103230-6736 && echo "newest-cni-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:33:59.949036  239842 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47278->127.0.0.1:49417: read: connection reset by peer
	I1231 10:34:03.099652  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20211231103230-6736
	
	I1231 10:34:03.099755  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.144059  239842 main.go:130] libmachine: Using SSH client type: native
	I1231 10:34:03.144255  239842 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49417 <nil> <nil>}
	I1231 10:34:03.144303  239842 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:34:03.284958  239842 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:34:03.284998  239842 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:34:03.285062  239842 ubuntu.go:177] setting up certificates
	I1231 10:34:03.285076  239842 provision.go:83] configureAuth start
	I1231 10:34:03.285144  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:03.331319  239842 provision.go:138] copyHostCerts
	I1231 10:34:03.331385  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:34:03.331393  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:34:03.331460  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:34:03.331544  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:34:03.331558  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:34:03.331579  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:34:03.331625  239842 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:34:03.331638  239842 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:34:03.331657  239842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:34:03.331695  239842 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20211231103230-6736 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20211231103230-6736]
	I1231 10:34:03.586959  239842 provision.go:172] copyRemoteCerts
	I1231 10:34:03.587049  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:34:03.587091  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:03.625102  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.720644  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:34:03.740753  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I1231 10:34:03.760615  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:34:03.780213  239842 provision.go:86] duration metric: configureAuth took 495.114028ms
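Note: the configureAuth phase above follows the docker-machine provisioning flow: refresh the local ca/cert/key PEMs, mint a server certificate whose SAN list covers the node IP, localhost, and the machine name, then copy it to /etc/docker on the node. A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 — illustrative only, not minikube's provision.go, and self-signed to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// The real provisioner signs with the CA key from .minikube/certs;
	// self-signing here keeps the sketch self-contained.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20211231103230-6736"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list logged above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-20211231103230-6736"},
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}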
	I1231 10:34:03.780262  239842 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:34:03.780481  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:03.780495  239842 machine.go:91] provisioned docker machine in 3.875801286s
	I1231 10:34:03.780501  239842 start.go:267] post-start starting for "newest-cni-20211231103230-6736" (driver="docker")
	I1231 10:34:03.780506  239842 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:34:03.780545  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:34:03.780586  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:02.111051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:04.112682  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:05.633463  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:08.133359  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:03.820417  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:03.925875  239842 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:34:03.930813  239842 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:34:03.930851  239842 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:34:03.930864  239842 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:34:03.930872  239842 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:34:03.930885  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:34:03.930949  239842 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:34:03.931028  239842 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:34:03.931126  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:34:03.940439  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:03.962877  239842 start.go:270] post-start completed in 182.361292ms
	I1231 10:34:03.962946  239842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:34:03.962978  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.003202  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.101581  239842 fix.go:57] fixHost completed within 4.868692803s
	I1231 10:34:04.101619  239842 start.go:80] releasing machines lock for "newest-cni-20211231103230-6736", held for 4.868770707s
	I1231 10:34:04.101715  239842 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20211231103230-6736
	I1231 10:34:04.141381  239842 ssh_runner.go:195] Run: systemctl --version
	I1231 10:34:04.141446  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.141386  239842 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:34:04.141549  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:04.184922  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.186345  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:04.300405  239842 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:34:04.314857  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:34:04.326678  239842 docker.go:158] disabling docker service ...
	I1231 10:34:04.326732  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:34:04.338952  239842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:34:04.350188  239842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:34:04.441879  239842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:34:04.523149  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:34:04.533774  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:34:04.548558  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
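Note: the Run line above writes minikube's generated containerd configuration: the base64 payload is decoded on the node with base64 -d and tee'd to /etc/containerd/config.toml. It decodes to a containerd v2 config beginning `version = 2` / `root = "/var/lib/containerd"`, including the CRI plugin settings (snapshotter = "overlayfs", conf_dir = "/etc/cni/net.mk") that the kubelet flags later in this log match. The %!s(MISSING) here and in the crictl.yaml command above is almost certainly the logger re-interpreting a literal printf verb in the echoed command, not what actually ran. A sketch of decoding the payload offline, using only its first bytes so the program stays short:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// First bytes of the payload from the log line above; paste in the
	// full blob to recover the whole config.toml.
	const blob = "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIg=="
	out, err := base64.StdEncoding.DecodeString(blob)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints: version = 2
	                         //         root = "/var/lib/containerd"
}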
	I1231 10:34:04.563078  239842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:34:04.570184  239842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:34:04.577904  239842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:34:04.661512  239842 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:34:04.743598  239842 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:34:04.743665  239842 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:34:04.748322  239842 start.go:458] Will wait 60s for crictl version
	I1231 10:34:04.748376  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:04.776556  239842 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:34:04Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
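Note: this failure is expected timing rather than a bug: containerd was restarted a moment earlier, and its CRI server answers with "server is not initialized yet" until startup completes, so retry.go reprobes after a backoff and the same `sudo crictl version` succeeds at 10:34:15 below. A generic sketch of that retry-with-jitter pattern — the names are illustrative, not minikube's actual retry API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs probe up to attempts times, sleeping a jittered delay
// between failures, and returns the last error if it never succeeds.
func retry(attempts int, base time.Duration, probe func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	start := time.Now()
	// Stand-in for `crictl version`: a CRI endpoint that needs ~5s to come up.
	err := retry(5, 2*time.Second, func() error {
		if time.Since(start) < 5*time.Second {
			return fmt.Errorf("server is not initialized yet")
		}
		return nil
	})
	fmt.Println("final:", err)
}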
	I1231 10:34:06.611941  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:09.111247  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:10.633648  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:13.133617  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:11.111440  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:13.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.111758  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:15.824426  239842 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:34:15.852646  239842 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:34:15.852709  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.875454  239842 ssh_runner.go:195] Run: containerd --version
	I1231 10:34:15.899983  239842 out.go:176] * Preparing Kubernetes v1.23.2-rc.0 on containerd 1.4.12 ...
	I1231 10:34:15.900089  239842 cli_runner.go:133] Run: docker network inspect newest-cni-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:34:15.938827  239842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1231 10:34:15.942919  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
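Note: this one-liner, repeated below for control-plane.minikube.internal, is an idempotent /etc/hosts edit: grep -v drops any stale line for the host name, the fresh IP mapping is echoed on the end, and the result goes to a temp file before sudo cp puts it back. The temp file avoids the classic shell pitfall of redirecting output onto the same file a pipeline is still reading, which would truncate /etc/hosts before grep ever saw it.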
	I1231 10:34:15.956827  239842 out.go:176]   - kubelet.network-plugin=cni
	I1231 10:34:15.959219  239842 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1231 10:34:15.961312  239842 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:34:15.963727  239842 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:34:15.632937  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:18.132486  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:15.966056  239842 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:34:15.966172  239842 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 10:34:15.966243  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:15.995265  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:15.995290  239842 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:34:15.995331  239842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:34:16.022435  239842 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:34:16.022458  239842 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:34:16.022504  239842 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:34:16.051296  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:16.051321  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:16.051335  239842 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1231 10:34:16.051348  239842 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.2-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20211231103230-6736 NodeName:newest-cni-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:34:16.051545  239842 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
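Note: the dump above is the complete generated kubeadm.yaml, a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) consumed by the single `kubeadm init --config` run below. The 0%!"(MISSING) strings look like a logging artifact rather than file content: the config text has evidently passed through a printf-style formatter that mangles literal % signs, and the file actually scp'd to the node (2221 bytes, below) must carry plain "0%" thresholds, since kubeadm init goes on to boot the control plane successfully. As the inline comments say, the 0% evictionHard values and imageGCHighThresholdPercent: 100 disable kubelet disk-pressure eviction and image garbage collection.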
	I1231 10:34:16.051662  239842 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --global-housekeeping-interval=60m --hostname-override=newest-cni-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
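Note: two details in the kubelet unit above. The empty ExecStart= line is the standard systemd drop-in idiom: it resets the ExecStart inherited from the base kubelet.service so that the full command line that follows replaces it instead of being rejected as a duplicate. And the flag set folds in each ExtraOptions entry from the profile config echoed after it (--cni-conf-dir=/etc/cni/net.mk, the housekeeping intervals, --network-plugin=cni). The drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 679-byte scp a few lines below.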
	I1231 10:34:16.051732  239842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2-rc.0
	I1231 10:34:16.059993  239842 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:34:16.060063  239842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:34:16.068090  239842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (679 bytes)
	I1231 10:34:16.083548  239842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1231 10:34:16.098901  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1231 10:34:16.116150  239842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:34:16.119950  239842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:34:16.130712  239842 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736 for IP: 192.168.76.2
	I1231 10:34:16.130826  239842 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:34:16.130889  239842 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:34:16.130980  239842 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/client.key
	I1231 10:34:16.131059  239842 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key.31bdca25
	I1231 10:34:16.131233  239842 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key
	I1231 10:34:16.131373  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:34:16.131415  239842 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:34:16.131431  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:34:16.131463  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:34:16.131498  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:34:16.131533  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:34:16.131586  239842 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:34:16.132861  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:34:16.154694  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:34:16.176000  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:34:16.199525  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/newest-cni-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:34:16.222324  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:34:16.245022  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:34:16.269767  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:34:16.296529  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:34:16.320759  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:34:16.344786  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:34:16.368318  239842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:34:16.389773  239842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:34:16.405384  239842 ssh_runner.go:195] Run: openssl version
	I1231 10:34:16.411011  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:34:16.419791  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423595  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.423671  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:34:16.429324  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:34:16.437405  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:34:16.446166  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.449927  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.450001  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:34:16.455858  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:34:16.464618  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:34:16.474030  239842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478705  239842 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.478802  239842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:34:16.485176  239842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:34:16.493913  239842 kubeadm.go:388] StartCluster: {Name:newest-cni-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:newest-cni-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:34:16.494024  239842 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:34:16.494078  239842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:34:16.524647  239842 cri.go:87] found id: "155c2bf79c8bd8b8bb6dbeb56755a02c58b466899f6a9e677748a3b0d10686ed"
	I1231 10:34:16.524684  239842 cri.go:87] found id: "8d8c36be3cd9499af063e0e758f58669f30330492f2849aca5442c85468a63bd"
	I1231 10:34:16.524693  239842 cri.go:87] found id: "24d8438942a01eb4000995dcc71ad4b52b67206a5f4af1954e644740df495c62"
	I1231 10:34:16.524705  239842 cri.go:87] found id: "3c9717f9388efe5a2acdad99a248f83aeb684c526c2e02b91a89fd56616cb240"
	I1231 10:34:16.524711  239842 cri.go:87] found id: "f3c21531b1b800501ebef6dcb786a58ec6fe912c6da1b160f1d0589524631a5f"
	I1231 10:34:16.524717  239842 cri.go:87] found id: "f1cfa511a7febb69b44502241a766bc5f1da2755e54f022d08281b3b3c4551ee"
	I1231 10:34:16.524722  239842 cri.go:87] found id: ""
	I1231 10:34:16.524779  239842 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:34:16.541313  239842 cri.go:114] JSON = null
	W1231 10:34:16.541371  239842 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
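Note: the warning above is minikube reconciling two views of the runtime: crictl ps reported six kube-system containers, but `runc list` under /run/containerd/runc/k8s.io returned null, i.e. no container state to unpause, so the unpause step is skipped with a warning and startup proceeds to the stale-config checks below.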
	I1231 10:34:16.541451  239842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:34:16.549111  239842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:34:16.556666  239842 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.557897  239842 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:16.558554  239842 kubeconfig.go:127] "newest-cni-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:34:16.559659  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:16.562421  239842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:34:16.570276  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.570337  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.585156  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.785503  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.785582  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:16.803739  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:16.986016  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:16.986082  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.003346  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.185436  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.201434  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.385753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.385834  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.401272  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.585464  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.585557  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.602009  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.786319  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.786390  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:17.801129  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:17.985405  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:17.985492  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.002220  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.185431  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.185522  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.200513  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.385798  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.385886  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.404508  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.585753  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.585841  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:18.601638  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.785922  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.786018  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:17.611312  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:20.111421  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	W1231 10:34:18.803740  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:18.985967  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:18.986067  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.002808  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.186110  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.186208  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.202949  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.386242  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.386333  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.403965  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.586311  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.586409  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.602884  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:34:19.602907  239842 api_server.go:165] Checking apiserver status ...
	I1231 10:34:19.602960  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:34:19.619190  239842 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:34:19.619231  239842 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:34:19.619258  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
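Note: roughly three seconds of pgrep probes at ~200ms intervals found no running kube-apiserver, so minikube concludes the previous control plane cannot be reused ("needs reconfigure") and falls back to a clean bootstrap: kubeadm reset --force clears the old state here, and kubeadm init re-creates the cluster a few lines below.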
	I1231 10:34:20.346752  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:20.358199  239842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:34:20.367408  239842 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:34:20.367462  239842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:34:20.377071  239842 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:34:20.377137  239842 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:34:20.729691  239842 out.go:203]   - Generating certificates and keys ...
	I1231 10:34:20.132812  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:22.133593  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:21.380175  239842 out.go:203]   - Booting up control plane ...
	I1231 10:34:22.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.613409  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:24.633597  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.134575  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:27.111697  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.610920  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:29.633090  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:31.633767  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.434058  239842 out.go:203]   - Configuring RBAC rules ...
	I1231 10:34:34.849120  239842 cni.go:93] Creating CNI manager for ""
	I1231 10:34:34.849163  239842 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:34:31.611280  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.111498  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:34.133525  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:36.632536  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:34.852842  239842 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:34:34.852967  239842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:34:34.857479  239842 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl ...
	I1231 10:34:34.857506  239842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:34:34.874366  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
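Note: the 2429-byte manifest applied here is evidently the kindnet CNI that cni.go recommended above for the docker-driver-plus-containerd combination; it is applied with the cluster's own pinned kubectl under /var/lib/minikube/binaries/v1.23.2-rc.0 rather than any kubectl on the host.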
	I1231 10:34:35.630846  239842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:34:35.630937  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.630961  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=newest-cni-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_34_35_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:35.651672  239842 ops.go:34] apiserver oom_adj: -16
	I1231 10:34:35.719909  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.316492  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.815787  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.316079  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:37.816415  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:38.316091  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:36.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.611394  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:38.632694  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:40.634440  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.133415  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:38.815995  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.316530  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:39.816139  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.315766  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.815697  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.316307  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:41.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.316374  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:42.816315  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:43.316484  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:40.611615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:42.612982  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.111363  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:45.633500  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:48.131867  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:43.816729  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.316037  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:44.815895  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.317082  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:45.816101  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.315798  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:46.815886  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.316269  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:47.815962  239842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:34:48.003966  239842 kubeadm.go:864] duration metric: took 12.373088498s to wait for elevateKubeSystemPrivileges.
	I1231 10:34:48.003999  239842 kubeadm.go:390] StartCluster complete in 31.510097342s
	I1231 10:34:48.004022  239842 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.004121  239842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:34:48.005914  239842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:34:48.526056  239842 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20211231103230-6736" rescaled to 1
	I1231 10:34:48.526130  239842 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.2-rc.0 ControlPlane:true Worker:true}
	I1231 10:34:48.526152  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:34:48.528894  239842 out.go:176] * Verifying Kubernetes components...
	I1231 10:34:48.528963  239842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:34:48.526213  239842 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1231 10:34:48.529044  239842 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529069  239842 addons.go:65] Setting dashboard=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529085  239842 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529105  239842 addons.go:65] Setting metrics-server=true in profile "newest-cni-20211231103230-6736"
	I1231 10:34:48.529112  239842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20211231103230-6736"
	I1231 10:34:48.529125  239842 addons.go:153] Setting addon metrics-server=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529134  239842 addons.go:165] addon metrics-server should already be in state true
	I1231 10:34:48.529072  239842 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20211231103230-6736"
	I1231 10:34:48.529172  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	W1231 10:34:48.529178  239842 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:34:48.529204  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529086  239842 addons.go:153] Setting addon dashboard=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.529469  239842 addons.go:165] addon dashboard should already be in state true
	I1231 10:34:48.529489  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529507  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.529669  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.529673  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.526424  239842 config.go:176] Loaded profile config "newest-cni-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.2-rc.0
	I1231 10:34:48.530032  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.586829  239842 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:34:48.590085  239842 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.589138  239842 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20211231103230-6736"
	W1231 10:34:48.590194  239842 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:34:48.590280  239842 host.go:66] Checking if "newest-cni-20211231103230-6736" exists ...
	I1231 10:34:48.590345  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:34:48.590360  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:34:48.590419  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.593339  239842 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:34:48.593453  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:34:48.593465  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:34:48.593553  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.591124  239842 cli_runner.go:133] Run: docker container inspect newest-cni-20211231103230-6736 --format={{.State.Status}}
	I1231 10:34:48.600479  239842 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:34:48.601216  239842 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.601358  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:34:48.601494  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.659910  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.665029  239842 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:34:48.665195  239842 api_server.go:51] waiting for apiserver process to appear ...
	I1231 10:34:48.665258  239842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:34:48.667125  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.679789  239842 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:48.679853  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:34:48.679963  239842 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20211231103230-6736
	I1231 10:34:48.686805  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:48.739096  239842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49417 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/newest-cni-20211231103230-6736/id_rsa Username:docker}
	I1231 10:34:47.111713  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:49.112487  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:48.902948  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:34:48.902984  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:34:48.903155  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:34:48.903256  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:34:48.903281  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:34:49.003786  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:34:49.003825  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:34:49.006836  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:34:49.006870  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:34:49.007822  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:34:49.095461  239842 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.095567  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:34:49.102715  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:34:49.102747  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:34:49.206111  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:34:49.206155  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:34:49.207830  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:34:49.384988  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:34:49.385020  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:34:49.392682  239842 api_server.go:71] duration metric: took 866.519409ms to wait for apiserver process to appear ...
	I1231 10:34:49.392800  239842 api_server.go:87] waiting for apiserver healthz status ...
	I1231 10:34:49.392828  239842 start.go:773] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
	I1231 10:34:49.392831  239842 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1231 10:34:49.403071  239842 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1231 10:34:49.404217  239842 api_server.go:140] control plane version: v1.23.2-rc.0
	I1231 10:34:49.404287  239842 api_server.go:130] duration metric: took 11.464686ms to wait for apiserver health ...
	I1231 10:34:49.404308  239842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1231 10:34:49.488542  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:34:49.488611  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:34:49.497619  239842 system_pods.go:59] 7 kube-system pods found
	I1231 10:34:49.497777  239842 system_pods.go:61] "coredns-64897985d-fh6sl" [f7a107d1-df7c-4b28-8f0d-eb5e6da38e4f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I1231 10:34:49.497853  239842 system_pods.go:61] "etcd-newest-cni-20211231103230-6736" [9212b001-2098-4378-b03f-05510269335f] Running
	I1231 10:34:49.497879  239842 system_pods.go:61] "kindnet-tkvfw" [4abcfbc0-b4e3-41f0-89d4-26c9a356f41e] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1231 10:34:49.497904  239842 system_pods.go:61] "kube-apiserver-newest-cni-20211231103230-6736" [a37117a3-3753-4e5b-b2f3-5d129612ee51] Running
	I1231 10:34:49.497934  239842 system_pods.go:61] "kube-controller-manager-newest-cni-20211231103230-6736" [bae63363-f3ff-4de8-9600-07baeb9a1915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1231 10:34:49.497959  239842 system_pods.go:61] "kube-proxy-228gt" [8d98d417-a803-474a-b07d-aa7c25391bd9] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1231 10:34:49.497987  239842 system_pods.go:61] "kube-scheduler-newest-cni-20211231103230-6736" [1a0e5162-834e-4b7f-815a-66f6b1511153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1231 10:34:49.498011  239842 system_pods.go:74] duration metric: took 93.696354ms to wait for pod list to return data ...
	I1231 10:34:49.498038  239842 default_sa.go:34] waiting for default service account to be created ...
	I1231 10:34:49.501869  239842 default_sa.go:45] found service account: "default"
	I1231 10:34:49.501947  239842 default_sa.go:55] duration metric: took 3.889685ms for default service account to be created ...
	I1231 10:34:49.501972  239842 kubeadm.go:542] duration metric: took 975.813843ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1231 10:34:49.502004  239842 node_conditions.go:102] verifying NodePressure condition ...
	I1231 10:34:49.584377  239842 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I1231 10:34:49.584456  239842 node_conditions.go:123] node cpu capacity is 8
	I1231 10:34:49.584501  239842 node_conditions.go:105] duration metric: took 82.480868ms to run NodePressure ...
	I1231 10:34:49.584516  239842 start.go:211] waiting for startup goroutines ...
	I1231 10:34:49.590387  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:34:49.590472  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:34:49.687079  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:34:49.687111  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:34:49.786712  239842 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:49.786813  239842 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:34:49.897022  239842 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:34:50.206491  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.303301914s)
	I1231 10:34:50.206645  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.198797907s)
	I1231 10:34:50.596403  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.388529563s)
	I1231 10:34:50.596444  239842 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20211231103230-6736"
	I1231 10:34:51.611369  239842 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.714292146s)
	I1231 10:34:51.613756  239842 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:34:51.613803  239842 addons.go:417] enableAddons completed in 3.087611947s
	I1231 10:34:51.651587  239842 start.go:493] kubectl: 1.23.1, cluster: 1.23.2-rc.0 (minor skew: 0)
	I1231 10:34:51.654312  239842 out.go:176] * Done! kubectl is now configured to use "newest-cni-20211231103230-6736" cluster and "default" namespace by default
	I1231 10:34:50.132740  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:52.134471  219726 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:34:53.136634  219726 node_ready.go:38] duration metric: took 4m0.01292471s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:34:53.140709  219726 out.go:176] 
	W1231 10:34:53.140941  219726 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:34:53.140961  219726 out.go:241] * 
	W1231 10:34:53.141727  219726 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:34:51.611136  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:53.611579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:56.111245  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:34:58.111520  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:00.610615  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:02.611131  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:04.611362  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:06.611510  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:09.110821  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:11.610753  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:13.611463  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:16.111356  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:18.611389  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:21.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:23.610710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:25.611361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:28.111622  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:30.610549  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:32.610684  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:34.612116  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:37.111767  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:39.611741  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:42.111589  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:44.611029  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:47.111644  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:49.611065  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:52.111090  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:54.111164  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:56.611524  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:35:59.112071  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:01.610398  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:03.611163  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:06.111054  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:08.610990  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:10.611355  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:13.111940  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:15.610865  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:17.611051  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:20.111559  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:22.611443  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:25.110897  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:27.111536  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:29.111842  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:31.611346  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:34.111760  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:36.611457  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:38.611614  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:40.611681  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:43.111250  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:45.111518  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:47.611374  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:50.111716  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:52.611710  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:55.111552  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:57.611111  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:36:59.611292  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:01.611646  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:04.112863  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:06.611407  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:08.611765  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:11.111509  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:13.112579  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:15.611537  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:17.611860  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:20.111361  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:22.611306  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:24.611757  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:26.611944  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:29.112062  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:31.611278  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:34.110991  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.611232  232840 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:37:36.613526  232840 node_ready.go:38] duration metric: took 4m0.012005111s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:37:36.616503  232840 out.go:176] 
	W1231 10:37:36.616735  232840 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:37:36.616764  232840 out.go:241] * 
	W1231 10:37:36.617727  232840 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	82faf00c40814       6de166512aa22       7 seconds ago        Running             kindnet-cni               5                   2de2afafee004
	63b83e6e86d6f       6de166512aa22       About a minute ago   Exited              kindnet-cni               4                   2de2afafee004
	bd73d75d2e911       b46c42588d511       4 minutes ago        Running             kube-proxy                0                   92acbaf1e0f9d
	6f1fab877ff5d       b6d7abedde399       4 minutes ago        Running             kube-apiserver            0                   4bf0c162f76ea
	631b3be24dd2b       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   8150f672d9df2
	a578cbf12a8e4       f51846a4fd288       4 minutes ago        Running             kube-controller-manager   0                   8046160d707ed
	78f1ab230e901       71d575efe6283       4 minutes ago        Running             kube-scheduler            0                   7a99114bf9d50
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:37:37 UTC. --
	Dec 31 10:34:46 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:46.650971775Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:34:46 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:46.651631799Z" level=info msg="StartContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:34:46 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:46.807515946Z" level=info msg="StartContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\" returns successfully"
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.187614220Z" level=info msg="Finish piping stderr of container \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.187641166Z" level=info msg="Finish piping stdout of container \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.188387641Z" level=info msg="TaskExit event &TaskExit{ContainerID:d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46,ID:d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46,Pid:2149,ExitStatus:2,ExitedAt:2021-12-31 10:34:57.188102 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.223181617Z" level=info msg="shim disconnected" id=d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.223284153Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.828209565Z" level=info msg="RemoveContainer for \"55b1624680ec12bc44fdc58768c9941aff7f6596755050fa6011eaa178ae50c0\""
	Dec 31 10:34:57 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:34:57.834077055Z" level=info msg="RemoveContainer for \"55b1624680ec12bc44fdc58768c9941aff7f6596755050fa6011eaa178ae50c0\" returns successfully"
	Dec 31 10:35:48 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:48.625853285Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Dec 31 10:35:48 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:48.652418268Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:35:48 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:48.654304463Z" level=info msg="StartContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:35:48 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:48.887374418Z" level=info msg="StartContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\" returns successfully"
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.195869733Z" level=info msg="Finish piping stderr of container \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.195870153Z" level=info msg="Finish piping stdout of container \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.196690177Z" level=info msg="TaskExit event &TaskExit{ContainerID:63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1,ID:63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1,Pid:2217,ExitStatus:2,ExitedAt:2021-12-31 10:35:59.19635391 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.231993789Z" level=info msg="shim disconnected" id=63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.232114334Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.955849355Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.962860726Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\" returns successfully"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.625457193Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.653652653Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.654422181Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.903113456Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_33_24_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:33:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:37:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:33:35 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:33:35 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:33:35 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:33:35 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-rgq8t                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-z25nr                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)     100m (1%)
	  memory             150Mi (0%)    50Mi (0%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m                     kube-proxy  
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m22s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m22s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549] <==
	* {"level":"info","ts":"2021-12-31T10:33:16.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-12-31T10:33:16.492Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.140Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.179Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  10:37:38 up  1:20,  0 users,  load average: 0.77, 1.62, 2.32
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e] <==
	* I1231 10:33:20.079343       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:33:20.080322       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:33:20.080422       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:33:20.080482       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1231 10:33:20.080553       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:33:20.080593       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:33:20.920577       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:33:20.920611       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:33:20.941744       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:33:20.945455       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:33:20.945476       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:33:21.601484       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:33:21.639873       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:33:21.720429       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:33:21.727635       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1231 10:33:21.728837       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:33:21.735242       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:33:22.098268       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:33:23.373002       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:33:23.385196       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:33:23.401377       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:33:28.493494       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:33:35.559469       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:33:35.607401       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:33:37.326422       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f] <==
	* I1231 10:33:34.946962       1 shared_informer.go:247] Caches are synced for GC 
	I1231 10:33:34.947085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I1231 10:33:34.946811       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:33:34.947601       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I1231 10:33:34.947738       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1231 10:33:34.947910       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I1231 10:33:34.950660       1 shared_informer.go:247] Caches are synced for job 
	I1231 10:33:34.954003       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1231 10:33:34.956287       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1231 10:33:34.956482       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1231 10:33:35.121007       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:33:35.121046       1 disruption.go:371] Sending events to api server.
	I1231 10:33:35.157911       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.163115       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.205005       1 shared_informer.go:247] Caches are synced for stateful set 
	I1231 10:33:35.566040       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:33:35.591164       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597663       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597700       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:33:35.621312       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z25nr"
	I1231 10:33:35.624088       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rgq8t"
	I1231 10:33:35.860532       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-5t6ck"
	I1231 10:33:35.890918       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-hkl6w"
	I1231 10:33:35.942712       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:33:35.986134       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-5t6ck"
	
	* 
	* ==> kube-proxy [bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869] <==
	* I1231 10:33:37.213196       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:33:37.213313       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:33:37.213369       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:33:37.321972       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:33:37.322042       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:33:37.322055       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:33:37.322072       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:33:37.322557       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:33:37.323369       1 config.go:317] "Starting service config controller"
	I1231 10:33:37.323386       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:33:37.323454       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:33:37.323461       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:33:37.424281       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1231 10:33:37.424342       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb] <==
	* W1231 10:33:20.086374       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:20.086415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:20.086485       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:20.086591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:20.087225       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:20.087307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:33:20.933451       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:33:20.933497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:33:20.977195       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:33:20.977245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:33:21.045273       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:33:21.045313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:33:21.102118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:21.102162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:21.139854       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:21.139888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:21.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:33:21.193755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:33:21.315680       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:33:21.315727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1231 10:33:21.315890       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:33:21.315957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:33:21.399055       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:21.399096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1231 10:33:21.711242       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:37:38 UTC. --
	Dec 31 10:36:25 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:36:25.622927    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:36:25 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:25.623225    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:36:28 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:28.763546    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:33.765024    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:36:37.623272    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:36:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:37.623728    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:36:38 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:38.767121    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:43 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:43.768971    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:48.770732    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:49 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:36:49.623290    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:36:49 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:49.623593    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:36:53 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:53.772519    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:36:58 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:36:58.773480    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:02 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:37:02.622890    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:37:02 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:02.623327    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:37:03 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:03.774174    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:08 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:08.775371    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:13 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:13.776831    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:16 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:37:16.622977    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:37:16 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:16.623542    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:37:18 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:18.777968    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:23 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:23.779662    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:28 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:28.781330    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:37:30.623095    1282 scope.go:110] "RemoveContainer" containerID="63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1"
	Dec 31 10:37:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:37:33.782179    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
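A reading note for the scheduler section in the dump above: the burst of "forbidden" list/watch errors between 10:33:20 and 10:33:21 is the usual startup race before the RBAC bootstrap policy lands, and it resolves once the client-ca informer syncs at 10:33:21.711242. The persistent, actionable errors here are the kubelet's "cni plugin not initialized" messages and the kindnet-cni CrashLoopBackOff. A sketch for separating the two classes when scanning a saved copy of the dump (the file name minikube.log is hypothetical):

    # Hypothetical filter over a saved copy of the dump above
    grep -E "cni plugin not initialized|CrashLoopBackOff" minikube.log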
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-hkl6w storage-provisioner: exit status 1 (56.46938ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hkl6w" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-hkl6w storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (308.80s)
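The describe step right above failed with NotFound because the pods listed as non-running had already been replaced by the time it ran, consistent with the CNI churn in the kubelet log. Were the profile still up, a minimal live-triage sketch would be (commands outside the test harness; pod and node names vary per run, and the "kindnet" DaemonSet name is taken from the controller-manager log above):

    # Why does kindnet-cni keep crashing?
    kubectl --context default-k8s-different-port-20211231103230-6736 -n kube-system logs ds/kindnet --previous
    # Confirm the taint blocking scheduling
    kubectl --context default-k8s-different-port-20211231103230-6736 get nodes -o jsonpath='{.items[0].spec.taints}'
    # Check whether any CNI config was ever written on the node
    out/minikube-linux-amd64 ssh -p default-k8s-different-port-20211231103230-6736 -- ls /etc/cni/net.d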

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (485.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 create -f testdata/busybox.yaml

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [e552783c-3b9c-46b4-9c1f-506038cae717] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
start_stop_delete_test.go:181: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2021-12-31 10:42:56.575310573 +0000 UTC m=+3675.557126515
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context embed-certs-20211231102953-6736 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6fqg (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-z6fqg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  47s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context embed-certs-20211231102953-6736 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
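The FailedScheduling event is consistent with the CNI never coming up: a node whose container runtime network is not ready typically carries a node.kubernetes.io/not-ready taint with effect NoSchedule, while the pod's default tolerations (visible in the describe output above) only cover the NoExecute flavor for 300s, so the scheduler leaves busybox Pending indefinitely. A quick check one could run against this profile to confirm (sketch, not part of the test run):

    # Expect node.kubernetes.io/not-ready with effect NoSchedule on the single node
    kubectl --context embed-certs-20211231102953-6736 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'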
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:30:07.68623588Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/docker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e190deddeb5b1d7e9b4481ad93139648183971bf041d59445e4f831398786169",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e190deddeb5b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "493d10b1b399122713b7a745a90f22b6329f172b21c0ede79a67fa2664cc1302",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
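For orientation, the inspect output shows the kicbase container itself is healthy and publishes its service ports only on 127.0.0.1 with ephemeral host ports (the API server's 8443/tcp is mapped to 49394 here), so the container is not the problem. If one wanted to pull a mapping straight out of this JSON rather than via minikube, a Go-template query of the kind docker inspect supports would look like (sketch; container name taken from the log):

    # Print the host port mapped to the API server port 8443/tcp
    docker inspect embed-certs-20211231102953-6736 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'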
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.29922948s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:22 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:39:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:39:28.297525  248388 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:39:28.297636  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297643  248388 out.go:310] Setting ErrFile to fd 2...
	I1231 10:39:28.297648  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297773  248388 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:39:28.298053  248388 out.go:304] Setting JSON to false
	I1231 10:39:28.299599  248388 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4923,"bootTime":1640942245,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:39:28.299706  248388 start.go:122] virtualization: kvm guest
	I1231 10:39:28.303369  248388 out.go:176] * [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:39:28.306213  248388 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:39:28.303616  248388 notify.go:174] Checking for updates...
	I1231 10:39:28.308369  248388 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:39:28.310826  248388 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:28.313682  248388 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:39:28.316049  248388 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:39:28.316742  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:28.320866  248388 out.go:176] * Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
	I1231 10:39:28.320932  248388 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:39:28.372088  248388 docker.go:132] docker version: linux-20.10.12
	I1231 10:39:28.372203  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.488905  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.412122441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
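
Note: the dump above is the decoded output of `docker system info --format "{{json .}}"`, which serializes the daemon's Info structure as one JSON object. A hedged sketch that reruns the same command and picks out a few of the fields visible in the log; the struct below is illustrative, not minikube's actual type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names mirror keys present in the info dump above
// (ServerVersion, Driver, NCPU, MemTotal).
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	Driver        string `json:"Driver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, driver=%s, cpus=%d, mem=%d bytes\n",
		info.ServerVersion, info.Driver, info.NCPU, info.MemTotal)
}
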
	I1231 10:39:28.489039  248388 docker.go:237] overlay module found
	I1231 10:39:28.491975  248388 out.go:176] * Using the docker driver based on existing profile
	I1231 10:39:28.492011  248388 start.go:280] selected driver: docker
	I1231 10:39:28.492018  248388 start.go:795] validating driver "docker" against &{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra
:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.492170  248388 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:39:28.492189  248388 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:39:28.492201  248388 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:39:28.492264  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.492289  248388 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:39:28.494431  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:39:28.495047  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.596528  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.528773732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:39:28.596686  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.596711  248388 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:39:28.599384  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
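
Note: the repeated warning above means the host exposes no usable cgroup memory controller, so the requested container memory limit cannot be enforced (startup continues regardless). A rough way to check this on a cgroup v1 host is to read /proc/cgroups, whose fourth column is 1 when a controller is enabled; this sketch is illustrative and is not the check oci.go actually performs:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Rows look like: memory <hierarchy> <num_cgroups> <enabled>
		fields := strings.Fields(sc.Text())
		if len(fields) == 4 && fields[0] == "memory" {
			fmt.Println("memory controller enabled:", fields[3] == "1")
		}
	}
}
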
	I1231 10:39:28.599534  248388 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:39:28.599569  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:28.599580  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:28.599601  248388 start_flags.go:298] config:
	{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sched
uledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.602165  248388 out.go:176] * Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	I1231 10:39:28.602214  248388 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:39:28.604479  248388 out.go:176] * Pulling base image ...
	I1231 10:39:28.604519  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:28.604576  248388 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:39:28.604590  248388 cache.go:57] Caching tarball of preloaded images
	I1231 10:39:28.604620  248388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:39:28.604864  248388 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:39:28.604881  248388 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1231 10:39:28.605028  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:28.648185  248388 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:39:28.648212  248388 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:39:28.648221  248388 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:39:28.648290  248388 start.go:313] acquiring machines lock for old-k8s-version-20211231102602-6736: {Name:mk363b8d877fe23a69d731c391a1b6f4ce841b33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:39:28.648398  248388 start.go:317] acquired machines lock for "old-k8s-version-20211231102602-6736" in 81.793µs
	I1231 10:39:28.648427  248388 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:39:28.648436  248388 fix.go:55] fixHost starting: 
	I1231 10:39:28.648678  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:28.687124  248388 fix.go:108] recreateIfNeeded on old-k8s-version-20211231102602-6736: state=Stopped err=<nil>
	W1231 10:39:28.687173  248388 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:39:28.691855  248388 out.go:176] * Restarting existing docker container for "old-k8s-version-20211231102602-6736" ...
	I1231 10:39:28.691970  248388 cli_runner.go:133] Run: docker start old-k8s-version-20211231102602-6736
	I1231 10:39:29.129996  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:29.173538  248388 kic.go:420] container "old-k8s-version-20211231102602-6736" state is running.
	I1231 10:39:29.174075  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:29.219092  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:29.219314  248388 machine.go:88] provisioning docker machine ...
	I1231 10:39:29.219347  248388 ubuntu.go:169] provisioning hostname "old-k8s-version-20211231102602-6736"
	I1231 10:39:29.219382  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:29.259417  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:29.259602  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:29.259620  248388 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20211231102602-6736 && echo "old-k8s-version-20211231102602-6736" | sudo tee /etc/hostname
	I1231 10:39:29.260468  248388 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54766->127.0.0.1:49422: read: connection reset by peer
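
Note: the handshake failure above is transient; the container was restarted a moment earlier and sshd is not yet accepting connections on the forwarded port (49422), so libmachine retries and the command succeeds three seconds later. A minimal sketch of that wait-until-reachable pattern (the port is from the log; the timeout is chosen for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForPort("127.0.0.1:49422", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("ssh port is accepting connections")
}
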
	I1231 10:39:32.408132  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20211231102602-6736
	
	I1231 10:39:32.408224  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:32.452034  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:32.452295  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:32.452329  248388 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20211231102602-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20211231102602-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20211231102602-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:39:32.592974  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:39:32.593020  248388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:39:32.593043  248388 ubuntu.go:177] setting up certificates
	I1231 10:39:32.593054  248388 provision.go:83] configureAuth start
	I1231 10:39:32.593097  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:32.631818  248388 provision.go:138] copyHostCerts
	I1231 10:39:32.631883  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:39:32.631890  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:39:32.631953  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:39:32.632060  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:39:32.632071  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:39:32.632110  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:39:32.632180  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:39:32.632189  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:39:32.632208  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:39:32.632302  248388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20211231102602-6736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20211231102602-6736]
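
Note: provision.go is minting a server certificate whose subject organization and SAN list are exactly the ones logged above, signed by the profile's CA (ca.pem/ca-key.pem). A minimal crypto/x509 sketch of a SAN-bearing certificate; it self-signs for brevity where the real flow signs with the CA, and the org, SANs, and 26280h lifetime below are copied from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20211231102602-6736"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-20211231102602-6736"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here; the real provisioner passes the CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
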
	I1231 10:39:33.171522  248388 provision.go:172] copyRemoteCerts
	I1231 10:39:33.171593  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:39:33.171626  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.215114  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.313197  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:39:33.336105  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I1231 10:39:33.356635  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1231 10:39:33.379375  248388 provision.go:86] duration metric: configureAuth took 786.294314ms
	I1231 10:39:33.379494  248388 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:39:33.379778  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:33.379801  248388 machine.go:91] provisioned docker machine in 4.160462173s
	I1231 10:39:33.379812  248388 start.go:267] post-start starting for "old-k8s-version-20211231102602-6736" (driver="docker")
	I1231 10:39:33.379817  248388 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:39:33.379857  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:39:33.379894  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.419775  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.517404  248388 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:39:33.521457  248388 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:39:33.521489  248388 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:39:33.521498  248388 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:39:33.521503  248388 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:39:33.521516  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:39:33.521566  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:39:33.521628  248388 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:39:33.521709  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:39:33.529871  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:33.549872  248388 start.go:270] post-start completed in 170.044596ms
	I1231 10:39:33.549940  248388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:39:33.549978  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.589440  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.685438  248388 fix.go:57] fixHost completed within 5.036996865s
	I1231 10:39:33.685483  248388 start.go:80] releasing machines lock for "old-k8s-version-20211231102602-6736", held for 5.037064541s
	I1231 10:39:33.685596  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725054  248388 ssh_runner.go:195] Run: systemctl --version
	I1231 10:39:33.725110  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725151  248388 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:39:33.725205  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.764466  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.765016  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.861869  248388 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:39:33.891797  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:39:33.904477  248388 docker.go:158] disabling docker service ...
	I1231 10:39:33.904532  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:39:33.916366  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:39:33.927743  248388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:39:34.013024  248388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:39:34.091153  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:39:34.102021  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:39:34.116459  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
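
Note: the long argument above is simply a base64-encoded containerd config.toml, piped through `base64 -d | sudo tee` into /etc/containerd/config.toml. Decoding just the opening of the blob shows the file's head; the prefix below is copied verbatim from the log (later in the same blob, cni conf_dir is the non-standard /etc/cni/net.mk that also appears in the kubelet flags):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	prefix := "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAK"
	out, err := base64.StdEncoding.DecodeString(prefix)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// Prints:
	// version = 2
	// root = "/var/lib/containerd"
	// state = "/run/containerd"
	// oom_score = 0
}
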
	I1231 10:39:34.131061  248388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:39:34.138871  248388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:39:34.147361  248388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:39:34.228786  248388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:39:34.310283  248388 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:39:34.310366  248388 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:39:34.316663  248388 start.go:458] Will wait 60s for crictl version
	I1231 10:39:34.316739  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:34.347621  248388 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:39:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:39:45.394578  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:45.422381  248388 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:39:45.422454  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.445888  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.473645  248388 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	I1231 10:39:45.473739  248388 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:39:45.512179  248388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1231 10:39:45.516345  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:39:45.530412  248388 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:39:45.532672  248388 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:39:45.534826  248388 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:39:45.534904  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:45.534972  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.562331  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.562363  248388 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:39:45.562405  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.588904  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.588927  248388 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:39:45.588971  248388 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:39:45.617144  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:45.617169  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:45.617187  248388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:39:45.617200  248388 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20211231102602-6736 NodeName:old-k8s-version-20211231102602-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:39:45.617337  248388 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20211231102602-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20211231102602-6736
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
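
Note: the manifest above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A stdlib-only sketch that splits such a file and reports each document's kind; the parsing is deliberately naive and for illustration only:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// kubeadm accepts multiple YAML documents separated by "---" lines.
	for i, doc := range strings.Split(string(b), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
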
	
	I1231 10:39:45.617416  248388 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=old-k8s-version-20211231102602-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
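
Note: in the drop-in above, the empty `ExecStart=` line is standard systemd practice: it clears the ExecStart inherited from the stock kubelet unit before the next line installs the full command, into which the three kubelet ExtraOptions from the profile are merged as flags. A hedged text/template sketch of rendering such a drop-in; the template and data shapes are illustrative (the real ExecStart carries many more flags), not minikube's own code:

package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet{{range $k, $v := .Flags}} --{{$k}}={{$v}}{{end}}

[Install]
`

func main() {
	data := struct {
		Version string
		Flags   map[string]string
	}{
		Version: "v1.16.0",
		Flags: map[string]string{
			"global-housekeeping-interval": "60m",
			"housekeeping-interval":        "5m",
			"cni-conf-dir":                 "/etc/cni/net.mk",
		},
	}
	// text/template iterates map keys in sorted order, so output is stable.
	template.Must(template.New("kubelet").Parse(dropin)).Execute(os.Stdout, data)
}
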
	I1231 10:39:45.617471  248388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1231 10:39:45.626756  248388 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:39:45.626830  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:39:45.634969  248388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (642 bytes)
	I1231 10:39:45.650571  248388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:39:45.667126  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1231 10:39:45.684267  248388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:39:45.687751  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
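
Note: the shell one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts, appends a fresh tab-separated entry, and copies the temp file back with `sudo cp` rather than renaming it: inside a Docker container /etc/hosts is a bind mount, so it can only be rewritten in place, never replaced. A rough Go equivalent of the same idea, illustrative only:

package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(b), "\n") {
		// Drop any existing entry for this hostname (tab-separated, as in the log).
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	// WriteFile truncates and rewrites the existing inode, like `cp` does.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
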
	I1231 10:39:45.698181  248388 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736 for IP: 192.168.49.2
	I1231 10:39:45.698295  248388 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:39:45.698331  248388 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:39:45.698394  248388 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key
	I1231 10:39:45.698446  248388 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2
	I1231 10:39:45.698482  248388 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key
	I1231 10:39:45.698570  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:39:45.698600  248388 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:39:45.698611  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:39:45.698633  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:39:45.698653  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:39:45.698673  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:39:45.698710  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:45.699579  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:39:45.721393  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:39:45.741875  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:39:45.762168  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:39:45.782716  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:39:45.803141  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:39:45.824081  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:39:45.844631  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:39:45.865581  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:39:45.888196  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:39:45.911552  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:39:45.932076  248388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:39:45.947761  248388 ssh_runner.go:195] Run: openssl version
	I1231 10:39:45.953604  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:39:45.964111  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968075  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968153  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.974160  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:39:45.983159  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:39:45.992274  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996413  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996467  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:46.002116  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:39:46.010648  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:39:46.019736  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024515  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024587  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.030635  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
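
Note: each certificate is being linked under its OpenSSL subject hash; the link created above for minikubeCA.pem (/etc/ssl/certs/b5213941.0) uses the hash that `openssl x509 -hash -noout` just computed, and <hash>.0 is the filename scheme OpenSSL's directory lookup expects. A hedged sketch of the same step driven from Go (the helper name is made up):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// os.Symlink fails if the link already exists; the log's
	// `test -L || ln -fs` guard serves the same purpose.
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
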
	I1231 10:39:46.039396  248388 kubeadm.go:388] StartCluster: {Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true n
ode_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:46.039522  248388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:39:46.039635  248388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:39:46.068660  248388 cri.go:87] found id: "9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb"
	I1231 10:39:46.068685  248388 cri.go:87] found id: "91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512"
	I1231 10:39:46.068691  248388 cri.go:87] found id: "090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6"
	I1231 10:39:46.068695  248388 cri.go:87] found id: "a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe"
	I1231 10:39:46.068699  248388 cri.go:87] found id: "fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4"
	I1231 10:39:46.068704  248388 cri.go:87] found id: "c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0"
	I1231 10:39:46.068708  248388 cri.go:87] found id: ""
	I1231 10:39:46.068754  248388 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:39:46.086307  248388 cri.go:114] JSON = null
	W1231 10:39:46.086363  248388 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I1231 10:39:46.086418  248388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:39:46.094669  248388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:39:46.102668  248388 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.103765  248388 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20211231102602-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:46.104304  248388 kubeconfig.go:127] "old-k8s-version-20211231102602-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:39:46.105141  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:39:46.107623  248388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:39:46.116159  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.116212  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.133979  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
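
Note: from here the log settles into a wait loop: roughly every 200ms minikube reruns `sudo pgrep -xnf kube-apiserver.*minikube.*` (-x exact match, -n newest, -f match the full command line) and treats pgrep's non-zero exit, meaning no apiserver process exists yet, as "keep waiting". A minimal sketch of that polling pattern; the deadline below is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
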
	[... the same "Checking apiserver status" probe repeats roughly every 200ms from 10:39:46.334 through 10:39:48.934, each attempt exiting with status 1 and empty stdout/stderr ...]
	I1231 10:39:49.134824  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:49.134924  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:49.150693  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:49.150726  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:49.150765  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:49.166744  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:39:49.166812  248388 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
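The probes above are a fixed-interval retry: the same pgrep runs about every 200ms over roughly three seconds and never finds a kube-apiserver process, so minikube concludes the cluster needs a full reconfigure and falls through to kubeadm reset. A hedged equivalent of the probe loop (the interval is read off the timestamps, not a minikube constant, and minikube bounds the retries where this sketch does not):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 0.2
    done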
	I1231 10:39:49.166837  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:39:49.915167  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:39:49.927449  248388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:39:49.935702  248388 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:39:49.935784  248388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:39:49.945133  248388 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:39:49.945201  248388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:40:01.197512  248388 out.go:203]   - Generating certificates and keys ...
	I1231 10:40:01.199961  248388 out.go:203]   - Booting up control plane ...
	I1231 10:40:01.203074  248388 out.go:203]   - Configuring RBAC rules ...
	I1231 10:40:01.206183  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:40:01.206224  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:40:01.208451  248388 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:40:01.208540  248388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:40:01.212803  248388 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I1231 10:40:01.212831  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:40:01.227527  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
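Because the profile was started without an explicit --cni, the docker driver plus containerd runtime combination defaults to kindnet (cni.go:160 above); the manifest is rendered in-memory to /var/tmp/minikube/cni.yaml and applied with the version-matched kubectl. Assuming the upstream DaemonSet name kindnet, the rollout could be checked on the node with:

    sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds kindnet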
	I1231 10:40:01.503411  248388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:40:01.503539  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=old-k8s-version-20211231102602-6736 minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.503559  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.617474  248388 ops.go:34] apiserver oom_adj: -16
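The -16 read back from /proc/&lt;pid&gt;/oom_adj is the legacy-scale image of the oom_score_adj around -998 that kubelet assigns critical control-plane pods (≈ -998 × 17 / 1000), so the check doubles as proof that the apiserver process exists and is OOM-protected. Checked by hand on the node:

    cat /proc/$(pgrep -n kube-apiserver)/oom_adj        # legacy scale, -17..15
    cat /proc/$(pgrep -n kube-apiserver)/oom_score_adj  # modern scale, -1000..1000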
	I1231 10:40:01.617597  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:02.286294  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the identical "kubectl get sa default" probe repeats every ~500ms from 10:40:02.785 through 10:40:16.285 ...]
	I1231 10:40:16.786057  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
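The repeated `get sa default` calls are minikube waiting for the default ServiceAccount to be minted before binding cluster-admin to kube-system:default (the elevateKubeSystemPrivileges step timed at 15.68s just below). Roughly, given the 500ms interval visible in the timestamps:

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done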
	I1231 10:40:17.183895  248388 kubeadm.go:864] duration metric: took 15.680328365s to wait for elevateKubeSystemPrivileges.
	I1231 10:40:17.184024  248388 kubeadm.go:390] StartCluster complete in 31.144629299s
	I1231 10:40:17.184054  248388 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.184188  248388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:40:17.186479  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.705951  248388 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211231102602-6736" rescaled to 1
	I1231 10:40:17.706014  248388 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}
	I1231 10:40:17.708934  248388 out.go:176] * Verifying Kubernetes components...
	I1231 10:40:17.706076  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:40:17.709017  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:40:17.706087  248388 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:40:17.709130  248388 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709154  248388 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709171  248388 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:40:17.709180  248388 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709197  248388 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709204  248388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709207  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709214  248388 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709224  248388 addons.go:165] addon metrics-server should already be in state true
	I1231 10:40:17.709252  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709546  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709679  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709707  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709180  248388 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709815  248388 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709831  248388 addons.go:165] addon dashboard should already be in state true
	I1231 10:40:17.709863  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.706302  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:40:17.710310  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.779917  248388 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:40:17.782452  248388 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:40:17.780138  248388 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:17.782516  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:40:17.782593  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.785955  248388 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.786036  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:40:17.786046  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:40:17.786103  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.786394  248388 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.786421  248388 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:40:17.786450  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.786855  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.791798  248388 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.791923  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:40:17.791936  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:40:17.792036  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.849271  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.850285  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.858679  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.864118  248388 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:17.864145  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:40:17.864187  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.908918  248388 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:40:17.909113  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
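The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile immediately before the forward plugin, so host.minikube.internal resolves to the docker network gateway. Reconstructed from the sed expression, the patched Corefile fragment should read approximately:

            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf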
	I1231 10:40:17.911266  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:18.000420  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:18.193571  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:40:18.193674  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:40:18.199990  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:40:18.200020  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:40:18.285365  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:18.380538  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:40:18.380663  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:40:18.381915  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:40:18.381967  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:40:18.479807  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:40:18.479844  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:40:18.483146  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.483186  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:40:18.506042  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:40:18.506075  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:40:18.507973  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.587116  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:40:18.587147  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:40:18.608838  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:40:18.608870  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:40:18.702834  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:40:18.702926  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:40:18.781339  248388 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I1231 10:40:18.799607  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:40:18.799644  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:40:18.889031  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:18.889104  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:40:18.912177  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:19.001324  248388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00085472s)
	I1231 10:40:19.487581  248388 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	I1231 10:40:19.890512  248388 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:40:19.890555  248388 addons.go:417] enableAddons completed in 2.184473142s
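Each `scp memory --> <path>` line means the manifest is embedded in the minikube binary and streamed over the SSH session rather than copied from a file on the host. On a live cluster the written addon manifests can be inspected in place (via minikube ssh):

    sudo ls -la /etc/kubernetes/addons/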
	I1231 10:40:19.918068  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:22.417255  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	[... the same node_ready.go:58 line repeats roughly every 2.5s from 10:40:24.417 through 10:42:48.917, always reporting "Ready":"False" ...]
	I1231 10:42:50.917643  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
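From 10:40:19 to 10:42:50 the node never reports Ready=True: node_ready.go polls about every 2.5s and every status check comes back False, eventually exhausting the 6m wait budget set at start.go:206 above. The same condition can be watched by hand with:

    kubectl wait --for=condition=Ready node/old-k8s-version-20211231102602-6736 --timeout=6m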
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	03de7bcc0efa5       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   44dd51a088ce3
	a380c0d98153c       b46c42588d511       12 minutes ago      Running             kube-proxy                0                   835b47ba02211
	7de9215e7da17       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   e2c8947bfc291
	bd3c847642a9f       f51846a4fd288       12 minutes ago      Running             kube-controller-manager   0                   ea2c5e2434c34
	4d064efc3679b       71d575efe6283       12 minutes ago      Running             kube-scheduler            0                   c1141691949f5
	eb97b3087125d       b6d7abedde399       12 minutes ago      Running             kube-apiserver            0                   0d2541a270208
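Everything in the table is Running except kindnet-cni, which sits at attempt 3 in state Exited: the CNI pod is crash-looping while the rest of the control plane stays up. Its logs can be pulled on the node with crictl, using the truncated ID from the table:

    sudo crictl ps -a --name kindnet-cni
    sudo crictl logs 03de7bcc0efa5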
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:42:57 UTC. --
	Dec 31 10:36:14 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:14.333839986Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:36:15 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:15.016200795Z" level=info msg="RemoveContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:36:15 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:15.026407893Z" level=info msg="RemoveContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\" returns successfully"
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.329678822Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.355748830Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.356461697Z" level=info msg="StartContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.608833259Z" level=info msg="StartContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\" returns successfully"
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.895668234Z" level=info msg="Finish piping stdout of container \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.895725113Z" level=info msg="Finish piping stderr of container \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.896740590Z" level=info msg="TaskExit event &TaskExit{ContainerID:4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08,ID:4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08,Pid:2431,ExitStatus:2,ExitedAt:2021-12-31 10:39:08.896124289 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.928871440Z" level=info msg="shim disconnected" id=4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.928974198Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:39:09 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:09.339925263Z" level=info msg="RemoveContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\""
	Dec 31 10:39:09 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:09.347839888Z" level=info msg="RemoveContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\" returns successfully"
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.329032821Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.369738498Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.370325382Z" level=info msg="StartContainer for \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.585144696Z" level=info msg="StartContainer for \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\" returns successfully"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.886900757Z" level=info msg="Finish piping stdout of container \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.886910043Z" level=info msg="Finish piping stderr of container \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.887710154Z" level=info msg="TaskExit event &TaskExit{ContainerID:03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f,ID:03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f,Pid:2522,ExitStatus:2,ExitedAt:2021-12-31 10:42:18.88746567 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.914834998Z" level=info msg="shim disconnected" id=03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.915125017Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:19.689443660Z" level=info msg="RemoveContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:42:19 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:19.695366956Z" level=info msg="RemoveContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_30_39_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:30:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:42:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-2gpsc                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jfhh7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x3 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
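The Ready=False condition ("cni plugin not initialized") together with the node.kubernetes.io/not-ready:NoSchedule taint matches the kindnet crash loop in the containerd log above: with no working CNI, kubelet never marks the runtime network ready and the node stays unschedulable. The taint and condition can be read directly with:

    kubectl get node embed-certs-20211231102953-6736 \
      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'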
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[... the same "martian source 10.244.0.134 from 10.244.0.4, on dev eth0" pair repeats seven more times over the next ~12 seconds before the final entry below ...]
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
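The "martian source" lines are the kernel flagging packets whose source address is implausible for the interface they arrived on, which is common on veth pairs while the pod network is flapping and is harmless noise here. Whether they are logged at all is controlled by a sysctl:

    sysctl net.ipv4.conf.all.log_martians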
	
	* 
	* ==> etcd [7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f] <==
	* {"level":"info","ts":"2021-12-31T10:30:32.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:30:32.208Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:30:32.211Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2021-12-31T10:32:36.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.318119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:509 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130914 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:36.853Z","caller":"traceutil/trace.go:171","msg":"trace[1858985268] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"156.695597ms","start":"2021-12-31T10:32:36.696Z","end":"2021-12-31T10:32:36.853Z","steps":["trace[1858985268] 'process raft request'  (duration: 53.739286ms)","trace[1858985268] 'compare'  (duration: 102.215258ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:37.266Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.925799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:37.266Z","caller":"traceutil/trace.go:171","msg":"trace[619865893] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:511; }","duration":"134.035118ms","start":"2021-12-31T10:32:37.132Z","end":"2021-12-31T10:32:37.266Z","steps":["trace[619865893] 'range keys from in-memory index tree'  (duration: 133.800618ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:46.740Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.769669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:46.740Z","caller":"traceutil/trace.go:171","msg":"trace[1529683504] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:512; }","duration":"108.867526ms","start":"2021-12-31T10:32:46.632Z","end":"2021-12-31T10:32:46.740Z","steps":["trace[1529683504] 'range keys from in-memory index tree'  (duration: 108.625105ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.307928ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:47.049Z","caller":"traceutil/trace.go:171","msg":"trace[1806958756] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"302.801998ms","start":"2021-12-31T10:32:46.746Z","end":"2021-12-31T10:32:47.049Z","steps":["trace[1806958756] 'process raft request'  (duration: 151.330883ms)","trace[1806958756] 'compare'  (duration: 151.186541ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-31T10:32:46.746Z","time spent":"302.874046ms","remote":"127.0.0.1:48400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >"}
	{"level":"warn","ts":"2021-12-31T10:32:47.395Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.090228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-12-31T10:32:47.395Z","caller":"traceutil/trace.go:171","msg":"trace[768014922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:513; }","duration":"217.205911ms","start":"2021-12-31T10:32:47.178Z","end":"2021-12-31T10:32:47.395Z","steps":["trace[768014922] 'range keys from in-memory index tree'  (duration: 216.99865ms)"],"step_count":1}
	{"level":"info","ts":"2021-12-31T10:40:32.728Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":562}
	{"level":"info","ts":"2021-12-31T10:40:32.729Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":562,"took":"586.656µs"}
	
	* 
	* ==> kernel <==
	*  10:42:58 up  1:25,  0 users,  load average: 0.92, 1.02, 1.83
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072] <==
	* I1231 10:30:35.080871       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:30:35.086357       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1231 10:30:35.089831       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:30:35.105027       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:30:35.185467       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1231 10:30:35.190675       1 controller.go:611] quota admission added evaluator for: namespaces
	I1231 10:30:35.962970       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:30:35.963003       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:30:35.970498       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:30:35.974237       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:30:35.974274       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:30:36.585265       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:30:36.625041       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:30:36.698601       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:30:36.708729       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1231 10:30:36.709846       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:30:36.714558       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:30:37.200677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:30:38.041816       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:30:38.051264       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:30:38.081752       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:30:43.185304       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:30:51.624551       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:30:51.823628       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:30:52.493305       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3] <==
	* I1231 10:30:51.142552       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1231 10:30:51.143335       1 event.go:294] "Event occurred" object="embed-certs-20211231102953-6736" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20211231102953-6736 event: Registered Node embed-certs-20211231102953-6736 in Controller"
	I1231 10:30:51.147804       1 range_allocator.go:374] Set node embed-certs-20211231102953-6736 PodCIDR to [10.244.0.0/24]
	I1231 10:30:51.153429       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.153457       1 event.go:294] "Event occurred" object="kube-system/etcd-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.154844       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.158578       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.179056       1 shared_informer.go:247] Caches are synced for service account 
	I1231 10:30:51.199625       1 shared_informer.go:247] Caches are synced for attach detach 
	I1231 10:30:51.218729       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:30:51.218771       1 disruption.go:371] Sending events to api server.
	I1231 10:30:51.219917       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1231 10:30:51.271457       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:30:51.324558       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.331860       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.635933       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jfhh7"
	I1231 10:30:51.641599       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2gpsc"
	I1231 10:30:51.747935       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792036       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792069       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:30:51.830856       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:30:52.031176       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8fwps"
	I1231 10:30:52.080996       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-65b6p"
	I1231 10:30:52.526990       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:30:52.594186       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-8fwps"
	
	* 
	* ==> kube-proxy [a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d] <==
	* I1231 10:30:52.397068       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:30:52.397149       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:30:52.397183       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:30:52.487490       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:30:52.487554       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:30:52.487568       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:30:52.487602       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:30:52.488071       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:30:52.489621       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:30:52.489639       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:30:52.489749       1 config.go:317] "Starting service config controller"
	I1231 10:30:52.489756       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:30:52.590564       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:30:52.590627       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3] <==
	* E1231 10:30:35.193161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:30:35.193184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:35.193230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.193236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:30:35.196995       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:30:35.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.197155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.999536       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.999579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.135168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:30:36.135211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:30:36.140141       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:30:36.140211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:30:36.180265       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:30:36.180311       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:30:36.180317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.180351       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.186253       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:30:36.186304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:30:36.323948       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.323983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.323990       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:36.324013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:30:36.802803       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1231 10:30:37.896209       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:42:58 UTC. --
	Dec 31 10:41:28 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:28.588520    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:33.590137    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:38.591498    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:43.592645    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:48.593367    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:53.594692    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:58 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:58.595672    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:03 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:03.596736    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:08 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:08.598179    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:13 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:13.599447    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:18.601077    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:19.688270    1285 scope.go:110] "RemoveContainer" containerID="4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:19.688652    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:19.688959    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:23 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:23.601888    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:28 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:28.602686    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:33.326136    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:33.326577    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:33.603534    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:38.604751    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:43.605867    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:46 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:46.326005    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:46 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:46.326327    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:48.607407    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:53.608494    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-65b6p storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner: exit status 1 (72.125009ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6fqg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z6fqg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  50s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-65b6p" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:

-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 220740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:30:07.68623588Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e190deddeb5b1d7e9b4481ad93139648183971bf041d59445e4f831398786169",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e190deddeb5b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "493d10b1b399122713b7a745a90f22b6329f172b21c0ede79a67fa2664cc1302",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.110621551s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:22 UTC | Fri, 31 Dec 2021 10:32:23 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | enable-default-cni-20211231101406-6736                     | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:24 UTC | Fri, 31 Dec 2021 10:32:25 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | enable-default-cni-20211231101406-6736         | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:29 UTC |
	|         | enable-default-cni-20211231101406-6736                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:39:28
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:39:28.297525  248388 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:39:28.297636  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297643  248388 out.go:310] Setting ErrFile to fd 2...
	I1231 10:39:28.297648  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297773  248388 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:39:28.298053  248388 out.go:304] Setting JSON to false
	I1231 10:39:28.299599  248388 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4923,"bootTime":1640942245,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:39:28.299706  248388 start.go:122] virtualization: kvm guest
	I1231 10:39:28.303369  248388 out.go:176] * [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:39:28.306213  248388 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:39:28.303616  248388 notify.go:174] Checking for updates...
	I1231 10:39:28.308369  248388 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:39:28.310826  248388 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:28.313682  248388 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:39:28.316049  248388 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:39:28.316742  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:28.320866  248388 out.go:176] * Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
	I1231 10:39:28.320932  248388 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:39:28.372088  248388 docker.go:132] docker version: linux-20.10.12
	I1231 10:39:28.372203  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.488905  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.412122441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:39:28.489039  248388 docker.go:237] overlay module found
	I1231 10:39:28.491975  248388 out.go:176] * Using the docker driver based on existing profile
	I1231 10:39:28.492011  248388 start.go:280] selected driver: docker
	I1231 10:39:28.492018  248388 start.go:795] validating driver "docker" against &{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.492170  248388 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:39:28.492189  248388 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:39:28.492201  248388 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:39:28.492264  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.492289  248388 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:39:28.494431  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:39:28.495047  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.596528  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.528773732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:39:28.596686  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.596711  248388 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:39:28.599384  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:39:28.599534  248388 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:39:28.599569  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:28.599580  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:28.599601  248388 start_flags.go:298] config:
	{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.602165  248388 out.go:176] * Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	I1231 10:39:28.602214  248388 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:39:28.604479  248388 out.go:176] * Pulling base image ...
	I1231 10:39:28.604519  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:28.604576  248388 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:39:28.604590  248388 cache.go:57] Caching tarball of preloaded images
	I1231 10:39:28.604620  248388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:39:28.604864  248388 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:39:28.604881  248388 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1231 10:39:28.605028  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:28.648185  248388 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:39:28.648212  248388 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:39:28.648221  248388 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:39:28.648290  248388 start.go:313] acquiring machines lock for old-k8s-version-20211231102602-6736: {Name:mk363b8d877fe23a69d731c391a1b6f4ce841b33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:39:28.648398  248388 start.go:317] acquired machines lock for "old-k8s-version-20211231102602-6736" in 81.793µs
	I1231 10:39:28.648427  248388 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:39:28.648436  248388 fix.go:55] fixHost starting: 
	I1231 10:39:28.648678  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:28.687124  248388 fix.go:108] recreateIfNeeded on old-k8s-version-20211231102602-6736: state=Stopped err=<nil>
	W1231 10:39:28.687173  248388 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:39:28.691855  248388 out.go:176] * Restarting existing docker container for "old-k8s-version-20211231102602-6736" ...
	I1231 10:39:28.691970  248388 cli_runner.go:133] Run: docker start old-k8s-version-20211231102602-6736
	I1231 10:39:29.129996  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:29.173538  248388 kic.go:420] container "old-k8s-version-20211231102602-6736" state is running.
	I1231 10:39:29.174075  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:29.219092  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:29.219314  248388 machine.go:88] provisioning docker machine ...
	I1231 10:39:29.219347  248388 ubuntu.go:169] provisioning hostname "old-k8s-version-20211231102602-6736"
	I1231 10:39:29.219382  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:29.259417  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:29.259602  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:29.259620  248388 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20211231102602-6736 && echo "old-k8s-version-20211231102602-6736" | sudo tee /etc/hostname
	I1231 10:39:29.260468  248388 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54766->127.0.0.1:49422: read: connection reset by peer
	I1231 10:39:32.408132  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20211231102602-6736
	
	I1231 10:39:32.408224  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:32.452034  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:32.452295  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:32.452329  248388 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20211231102602-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20211231102602-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20211231102602-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:39:32.592974  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:39:32.593020  248388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:39:32.593043  248388 ubuntu.go:177] setting up certificates
	I1231 10:39:32.593054  248388 provision.go:83] configureAuth start
	I1231 10:39:32.593097  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:32.631818  248388 provision.go:138] copyHostCerts
	I1231 10:39:32.631883  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:39:32.631890  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:39:32.631953  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:39:32.632060  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:39:32.632071  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:39:32.632110  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:39:32.632180  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:39:32.632189  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:39:32.632208  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:39:32.632302  248388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20211231102602-6736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20211231102602-6736]
	I1231 10:39:33.171522  248388 provision.go:172] copyRemoteCerts
	I1231 10:39:33.171593  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:39:33.171626  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.215114  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.313197  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:39:33.336105  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I1231 10:39:33.356635  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1231 10:39:33.379375  248388 provision.go:86] duration metric: configureAuth took 786.294314ms
	I1231 10:39:33.379494  248388 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:39:33.379778  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:33.379801  248388 machine.go:91] provisioned docker machine in 4.160462173s
	I1231 10:39:33.379812  248388 start.go:267] post-start starting for "old-k8s-version-20211231102602-6736" (driver="docker")
	I1231 10:39:33.379817  248388 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:39:33.379857  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:39:33.379894  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.419775  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.517404  248388 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:39:33.521457  248388 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:39:33.521489  248388 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:39:33.521498  248388 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:39:33.521503  248388 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:39:33.521516  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:39:33.521566  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:39:33.521628  248388 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:39:33.521709  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:39:33.529871  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:33.549872  248388 start.go:270] post-start completed in 170.044596ms
	I1231 10:39:33.549940  248388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:39:33.549978  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.589440  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.685438  248388 fix.go:57] fixHost completed within 5.036996865s
	I1231 10:39:33.685483  248388 start.go:80] releasing machines lock for "old-k8s-version-20211231102602-6736", held for 5.037064541s
	I1231 10:39:33.685596  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725054  248388 ssh_runner.go:195] Run: systemctl --version
	I1231 10:39:33.725110  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725151  248388 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:39:33.725205  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.764466  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.765016  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.861869  248388 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:39:33.891797  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:39:33.904477  248388 docker.go:158] disabling docker service ...
	I1231 10:39:33.904532  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:39:33.916366  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:39:33.927743  248388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:39:34.013024  248388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:39:34.091153  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:39:34.102021  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:39:34.116459  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
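Decoded, the base64 payload above is the containerd config.toml that minikube installs. An abbreviated excerpt (decoded from the log; the full file contains further plugin stanzas):

	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	# ...
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "k8s.gcr.io/pause:3.1"
	  [plugins."io.containerd.grpc.v1.cri".containerd]
	    snapshotter = "overlayfs"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    bin_dir = "/opt/cni/bin"
	    conf_dir = "/etc/cni/net.mk"

Note that conf_dir is the non-standard /etc/cni/net.mk, matching the kubelet cni-conf-dir extra-config carried in this profile, and SystemdCgroup = false matches the cgroupfs cgroupDriver in the kubelet config further down.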
	I1231 10:39:34.131061  248388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:39:34.138871  248388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:39:34.147361  248388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:39:34.228786  248388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:39:34.310283  248388 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:39:34.310366  248388 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:39:34.316663  248388 start.go:458] Will wait 60s for crictl version
	I1231 10:39:34.316739  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:34.347621  248388 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:39:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:39:45.394578  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:45.422381  248388 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:39:45.422454  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.445888  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.473645  248388 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	I1231 10:39:45.473739  248388 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:39:45.512179  248388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1231 10:39:45.516345  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
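The one-liner above is minikube's idempotent /etc/hosts edit: grep -v drops any stale host.minikube.internal entry, echo appends the fresh mapping, and the merged file is copied back over /etc/hosts via a temp file. The net effect is the single entry:

	192.168.49.1	host.minikube.internal

The same pattern repeats below for control-plane.minikube.internal.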
	I1231 10:39:45.530412  248388 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:39:45.532672  248388 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:39:45.534826  248388 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:39:45.534904  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:45.534972  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.562331  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.562363  248388 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:39:45.562405  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.588904  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.588927  248388 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:39:45.588971  248388 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:39:45.617144  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:45.617169  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:45.617187  248388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:39:45.617200  248388 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20211231102602-6736 NodeName:old-k8s-version-20211231102602-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:39:45.617337  248388 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20211231102602-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20211231102602-6736
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:39:45.617416  248388 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=old-k8s-version-20211231102602-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
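One detail in the unit fragment above is the systemd drop-in idiom: an empty ExecStart= clears the command inherited from the base kubelet.service so the next line can replace it. minikube writes the fragment out as the 642-byte 10-kubeadm.conf scp'd below:

	[Service]
	# empty assignment resets the ExecStart inherited from the base unit
	ExecStart=
	# replacement command line (full flag list as logged above)
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet ...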
	I1231 10:39:45.617471  248388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1231 10:39:45.626756  248388 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:39:45.626830  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:39:45.634969  248388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (642 bytes)
	I1231 10:39:45.650571  248388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:39:45.667126  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1231 10:39:45.684267  248388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:39:45.687751  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:39:45.698181  248388 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736 for IP: 192.168.49.2
	I1231 10:39:45.698295  248388 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:39:45.698331  248388 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:39:45.698394  248388 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key
	I1231 10:39:45.698446  248388 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2
	I1231 10:39:45.698482  248388 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key
	I1231 10:39:45.698570  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:39:45.698600  248388 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:39:45.698611  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:39:45.698633  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:39:45.698653  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:39:45.698673  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:39:45.698710  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:45.699579  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:39:45.721393  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:39:45.741875  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:39:45.762168  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:39:45.782716  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:39:45.803141  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:39:45.824081  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:39:45.844631  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:39:45.865581  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:39:45.888196  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:39:45.911552  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:39:45.932076  248388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:39:45.947761  248388 ssh_runner.go:195] Run: openssl version
	I1231 10:39:45.953604  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:39:45.964111  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968075  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968153  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.974160  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:39:45.983159  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:39:45.992274  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996413  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996467  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:46.002116  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:39:46.010648  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:39:46.019736  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024515  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024587  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.030635  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
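The openssl/ln sequence above is the standard OpenSSL subject-hash convention: every CA certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 so that TLS libraries can find it by hash. A minimal sketch of the same step, assuming example.pem is any PEM certificate (hypothetical path, not from this run):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"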
	I1231 10:39:46.039396  248388 kubeadm.go:388] StartCluster: {Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:46.039522  248388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:39:46.039635  248388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:39:46.068660  248388 cri.go:87] found id: "9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb"
	I1231 10:39:46.068685  248388 cri.go:87] found id: "91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512"
	I1231 10:39:46.068691  248388 cri.go:87] found id: "090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6"
	I1231 10:39:46.068695  248388 cri.go:87] found id: "a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe"
	I1231 10:39:46.068699  248388 cri.go:87] found id: "fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4"
	I1231 10:39:46.068704  248388 cri.go:87] found id: "c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0"
	I1231 10:39:46.068708  248388 cri.go:87] found id: ""
	I1231 10:39:46.068754  248388 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:39:46.086307  248388 cri.go:114] JSON = null
	W1231 10:39:46.086363  248388 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
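The warning above comes from two views of the runtime disagreeing: crictl, going through the CRI API, reports six kube-system containers, while querying runc directly in the k8s.io root returns a null JSON list, so minikube cannot determine which containers are paused. The two probes, exactly as the log runs them, can be reproduced on the node:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json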
	I1231 10:39:46.086418  248388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:39:46.094669  248388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:39:46.102668  248388 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.103765  248388 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20211231102602-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:46.104304  248388 kubeconfig.go:127] "old-k8s-version-20211231102602-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:39:46.105141  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
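The repair step re-adds the missing cluster and context entries to the shared kubeconfig under a file lock. A sketch of how to confirm the context exists afterwards (assuming KUBECONFIG points at the file shown in the log):

	kubectl config get-contexts --kubeconfig "$KUBECONFIG"   # should list old-k8s-version-20211231102602-6736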
	I1231 10:39:46.107623  248388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:39:46.116159  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.116212  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.133979  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... 15 more identical apiserver status checks, roughly every 200ms from 10:39:46.334 through 10:39:49.134, each failing with "Process exited with status 1" ...]
	I1231 10:39:49.150726  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:49.150765  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:49.166744  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:39:49.166812  248388 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:39:49.166837  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:39:49.915167  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:39:49.927449  248388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:39:49.935702  248388 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:39:49.935784  248388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:39:49.945133  248388 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:39:49.945201  248388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
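Because the config check found none of the stale /etc/kubernetes/*.conf files, minikube skips cleanup and goes straight to kubeadm init. The long --ignore-preflight-errors list mirrors what the docker driver cannot satisfy: SystemVerification, the already-populated manifest and etcd directories, port 10250, swap, and the bridge-nf-call-iptables sysctl. The manual equivalent, reduced to its shape (this is an abridged sketch; the full flag list is in the log line above):

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap,Port-10250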
	I1231 10:40:01.197512  248388 out.go:203]   - Generating certificates and keys ...
	I1231 10:40:01.199961  248388 out.go:203]   - Booting up control plane ...
	I1231 10:40:01.203074  248388 out.go:203]   - Configuring RBAC rules ...
	I1231 10:40:01.206183  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:40:01.206224  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:40:01.208451  248388 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:40:01.208540  248388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:40:01.212803  248388 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I1231 10:40:01.212831  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:40:01.227527  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
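For the docker driver with the containerd runtime, minikube defaults to kindnet: the manifest is written to /var/tmp/minikube/cni.yaml and applied with the version-matched kubectl. After the apply, the CNI pieces the kubelet needs should exist on disk; a sketch of the checks (note this profile overrides cni-conf-dir to /etc/cni/net.mk via the ExtraOptions shown earlier):

	ls /opt/cni/bin/portmap    # binary presence check, as in the log
	sudo ls /etc/cni/net.mk    # this profile's overridden CNI conf dir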
	I1231 10:40:01.503411  248388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:40:01.503539  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=old-k8s-version-20211231102602-6736 minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.503559  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.617474  248388 ops.go:34] apiserver oom_adj: -16
	I1231 10:40:01.617597  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "kubectl get sa default" readiness probe repeats every ~500ms, 29 more runs from 10:40:02.286 through 10:40:16.285 ...]
	I1231 10:40:16.786057  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:17.183895  248388 kubeadm.go:864] duration metric: took 15.680328365s to wait for elevateKubeSystemPrivileges.
	I1231 10:40:17.184024  248388 kubeadm.go:390] StartCluster complete in 31.144629299s
	I1231 10:40:17.184054  248388 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.184188  248388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:40:17.186479  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.705951  248388 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211231102602-6736" rescaled to 1
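For single-node clusters minikube scales the coredns deployment down to one replica (kapi.go above does this through the API). A CLI equivalent, as a sketch using the same version-matched kubectl:

	sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1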
	I1231 10:40:17.706014  248388 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}
	I1231 10:40:17.708934  248388 out.go:176] * Verifying Kubernetes components...
	I1231 10:40:17.706076  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:40:17.709017  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:40:17.706087  248388 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:40:17.709130  248388 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709154  248388 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709171  248388 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:40:17.709180  248388 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709197  248388 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709204  248388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709207  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709214  248388 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709224  248388 addons.go:165] addon metrics-server should already be in state true
	I1231 10:40:17.709252  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709546  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709679  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709707  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709180  248388 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709815  248388 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709831  248388 addons.go:165] addon dashboard should already be in state true
	I1231 10:40:17.709863  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.706302  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:40:17.710310  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.779917  248388 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:40:17.782452  248388 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:40:17.780138  248388 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:17.782516  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:40:17.782593  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.785955  248388 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.786036  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:40:17.786046  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:40:17.786103  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.786394  248388 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.786421  248388 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:40:17.786450  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.786855  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.791798  248388 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.791923  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:40:17.791936  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:40:17.792036  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.849271  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.850285  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.858679  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.864118  248388 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:17.864145  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:40:17.864187  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.908918  248388 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:40:17.909113  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:40:17.911266  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:18.000420  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:18.193571  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:40:18.193674  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:40:18.199990  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:40:18.200020  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:40:18.285365  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:18.380538  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:40:18.380663  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:40:18.381915  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:40:18.381967  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:40:18.479807  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:40:18.479844  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:40:18.483146  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.483186  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:40:18.506042  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:40:18.506075  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:40:18.507973  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.587116  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:40:18.587147  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:40:18.608838  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:40:18.608870  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:40:18.702834  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:40:18.702926  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:40:18.781339  248388 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
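The sed pipeline at 10:40:17.909 splices a hosts block into the coredns ConfigMap ahead of the forward directive, so pods can resolve host.minikube.internal to the host gateway. A sketch of how to verify it, with the stanza reconstructed from the sed expression shown in comments:

	sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml
	# the Corefile should now contain:
	#         hosts {
	#            192.168.49.1 host.minikube.internal
	#            fallthrough
	#         }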
	I1231 10:40:18.799607  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:40:18.799644  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:40:18.889031  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:18.889104  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:40:18.912177  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:19.001324  248388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00085472s)
	I1231 10:40:19.487581  248388 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	I1231 10:40:19.890512  248388 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:40:19.890555  248388 addons.go:417] enableAddons completed in 2.184473142s
	I1231 10:40:19.918068  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	[... the poll repeats every ~2.5s from 10:40:22 through 10:42:55, and the node reports "Ready":"False" on all 68 checks ...]
	I1231 10:42:57.917329  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	03de7bcc0efa5       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   44dd51a088ce3
	a380c0d98153c       b46c42588d511       12 minutes ago      Running             kube-proxy                0                   835b47ba02211
	7de9215e7da17       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   e2c8947bfc291
	bd3c847642a9f       f51846a4fd288       12 minutes ago      Running             kube-controller-manager   0                   ea2c5e2434c34
	4d064efc3679b       71d575efe6283       12 minutes ago      Running             kube-scheduler            0                   c1141691949f5
	eb97b3087125d       b6d7abedde399       12 minutes ago      Running             kube-apiserver            0                   0d2541a270208
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:43:00 UTC. --
	Dec 31 10:36:14 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:14.333839986Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:36:15 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:15.016200795Z" level=info msg="RemoveContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\""
	Dec 31 10:36:15 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:15.026407893Z" level=info msg="RemoveContainer for \"a0dcfaded27988532fd2f6cf67643d7ef3aa93cb98da8884fe6d6e539438d20a\" returns successfully"
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.329678822Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.355748830Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.356461697Z" level=info msg="StartContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:36:28 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:36:28.608833259Z" level=info msg="StartContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\" returns successfully"
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.895668234Z" level=info msg="Finish piping stdout of container \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.895725113Z" level=info msg="Finish piping stderr of container \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.896740590Z" level=info msg="TaskExit event &TaskExit{ContainerID:4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08,ID:4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08,Pid:2431,ExitStatus:2,ExitedAt:2021-12-31 10:39:08.896124289 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.928871440Z" level=info msg="shim disconnected" id=4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08
	Dec 31 10:39:08 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:08.928974198Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:39:09 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:09.339925263Z" level=info msg="RemoveContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\""
	Dec 31 10:39:09 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:09.347839888Z" level=info msg="RemoveContainer for \"7ea8e6db9c1e87be1aa50758a859dff16936dc01f47b666335efb3ef53644353\" returns successfully"
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.329032821Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.369738498Z" level=info msg="CreateContainer within sandbox \"44dd51a088ce33bf525525cb8330710d2c26acdbd4f05e817316f4c87dd3a31a\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.370325382Z" level=info msg="StartContainer for \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:39:38 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:39:38.585144696Z" level=info msg="StartContainer for \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\" returns successfully"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.886900757Z" level=info msg="Finish piping stdout of container \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.886910043Z" level=info msg="Finish piping stderr of container \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\""
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.887710154Z" level=info msg="TaskExit event &TaskExit{ContainerID:03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f,ID:03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f,Pid:2522,ExitStatus:2,ExitedAt:2021-12-31 10:42:18.88746567 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.914834998Z" level=info msg="shim disconnected" id=03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f
	Dec 31 10:42:18 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:18.915125017Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:19.689443660Z" level=info msg="RemoveContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\""
	Dec 31 10:42:19 embed-certs-20211231102953-6736 containerd[466]: time="2021-12-31T10:42:19.695366956Z" level=info msg="RemoveContainer for \"4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_30_39_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:30:35 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:42:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:41:05 +0000   Fri, 31 Dec 2021 10:30:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-2gpsc                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jfhh7                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x3 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f] <==
	* {"level":"info","ts":"2021-12-31T10:30:32.208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:30:32.208Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:30:32.209Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:30:32.210Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:30:32.211Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2021-12-31T10:32:36.853Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"102.318119ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130916 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:509 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130914 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:36.853Z","caller":"traceutil/trace.go:171","msg":"trace[1858985268] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"156.695597ms","start":"2021-12-31T10:32:36.696Z","end":"2021-12-31T10:32:36.853Z","steps":["trace[1858985268] 'process raft request'  (duration: 53.739286ms)","trace[1858985268] 'compare'  (duration: 102.215258ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:37.266Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"133.925799ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:37.266Z","caller":"traceutil/trace.go:171","msg":"trace[619865893] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:511; }","duration":"134.035118ms","start":"2021-12-31T10:32:37.132Z","end":"2021-12-31T10:32:37.266Z","steps":["trace[619865893] 'range keys from in-memory index tree'  (duration: 133.800618ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:46.740Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"108.769669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-20211231102953-6736\" ","response":"range_response_count:1 size:4942"}
	{"level":"info","ts":"2021-12-31T10:32:46.740Z","caller":"traceutil/trace.go:171","msg":"trace[1529683504] range","detail":"{range_begin:/registry/minions/embed-certs-20211231102953-6736; range_end:; response_count:1; response_revision:512; }","duration":"108.867526ms","start":"2021-12-31T10:32:46.632Z","end":"2021-12-31T10:32:46.740Z","steps":["trace[1529683504] 'range keys from in-memory index tree'  (duration: 108.625105ms)"],"step_count":1}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"151.307928ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238508214417130964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2021-12-31T10:32:47.049Z","caller":"traceutil/trace.go:171","msg":"trace[1806958756] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"302.801998ms","start":"2021-12-31T10:32:46.746Z","end":"2021-12-31T10:32:47.049Z","steps":["trace[1806958756] 'process raft request'  (duration: 151.330883ms)","trace[1806958756] 'compare'  (duration: 151.186541ms)"],"step_count":2}
	{"level":"warn","ts":"2021-12-31T10:32:47.049Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2021-12-31T10:32:46.746Z","time spent":"302.874046ms","remote":"127.0.0.1:48400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.58.2\" mod_revision:511 > success:<request_put:<key:\"/registry/masterleases/192.168.58.2\" value_size:67 lease:3238508214417130962 >> failure:<request_range:<key:\"/registry/masterleases/192.168.58.2\" > >"}
	{"level":"warn","ts":"2021-12-31T10:32:47.395Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"217.090228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2021-12-31T10:32:47.395Z","caller":"traceutil/trace.go:171","msg":"trace[768014922] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:513; }","duration":"217.205911ms","start":"2021-12-31T10:32:47.178Z","end":"2021-12-31T10:32:47.395Z","steps":["trace[768014922] 'range keys from in-memory index tree'  (duration: 216.99865ms)"],"step_count":1}
	{"level":"info","ts":"2021-12-31T10:40:32.728Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":562}
	{"level":"info","ts":"2021-12-31T10:40:32.729Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":562,"took":"586.656µs"}
	
	* 
	* ==> kernel <==
	*  10:43:00 up  1:25,  0 users,  load average: 1.09, 1.06, 1.84
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072] <==
	* I1231 10:30:35.080871       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:30:35.086357       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1231 10:30:35.089831       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:30:35.105027       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:30:35.185467       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1231 10:30:35.190675       1 controller.go:611] quota admission added evaluator for: namespaces
	I1231 10:30:35.962970       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:30:35.963003       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:30:35.970498       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:30:35.974237       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:30:35.974274       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:30:36.585265       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:30:36.625041       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:30:36.698601       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:30:36.708729       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1231 10:30:36.709846       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:30:36.714558       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:30:37.200677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:30:38.041816       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:30:38.051264       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:30:38.081752       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:30:43.185304       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:30:51.624551       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:30:51.823628       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:30:52.493305       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3] <==
	* I1231 10:30:51.142552       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1231 10:30:51.143335       1 event.go:294] "Event occurred" object="embed-certs-20211231102953-6736" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20211231102953-6736 event: Registered Node embed-certs-20211231102953-6736 in Controller"
	I1231 10:30:51.147804       1 range_allocator.go:374] Set node embed-certs-20211231102953-6736 PodCIDR to [10.244.0.0/24]
	I1231 10:30:51.153429       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.153457       1 event.go:294] "Event occurred" object="kube-system/etcd-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.154844       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.158578       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-embed-certs-20211231102953-6736" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1231 10:30:51.179056       1 shared_informer.go:247] Caches are synced for service account 
	I1231 10:30:51.199625       1 shared_informer.go:247] Caches are synced for attach detach 
	I1231 10:30:51.218729       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:30:51.218771       1 disruption.go:371] Sending events to api server.
	I1231 10:30:51.219917       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I1231 10:30:51.271457       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:30:51.324558       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.331860       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:30:51.635933       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jfhh7"
	I1231 10:30:51.641599       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2gpsc"
	I1231 10:30:51.747935       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792036       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:30:51.792069       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:30:51.830856       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:30:52.031176       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-8fwps"
	I1231 10:30:52.080996       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-65b6p"
	I1231 10:30:52.526990       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:30:52.594186       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-8fwps"
	
	* 
	* ==> kube-proxy [a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d] <==
	* I1231 10:30:52.397068       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:30:52.397149       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:30:52.397183       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:30:52.487490       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:30:52.487554       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:30:52.487568       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:30:52.487602       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:30:52.488071       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:30:52.489621       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:30:52.489639       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:30:52.489749       1 config.go:317] "Starting service config controller"
	I1231 10:30:52.489756       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:30:52.590564       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:30:52.590627       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3] <==
	* E1231 10:30:35.193161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:30:35.193184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:35.193230       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.193236       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:30:35.196995       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:30:35.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.197155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:35.999536       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:35.999579       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.135168       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:30:36.135211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:30:36.140141       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:30:36.140211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:30:36.180265       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:30:36.180311       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:30:36.180317       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.180351       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.186253       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:30:36.186304       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:30:36.323948       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:30:36.323983       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:30:36.323990       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:30:36.324013       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:30:36.802803       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1231 10:30:37.896209       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:30:08 UTC, end at Fri 2021-12-31 10:43:00 UTC. --
	Dec 31 10:41:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:33.590137    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:38.591498    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:43.592645    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:48.593367    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:53.594692    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:41:58 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:41:58.595672    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:03 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:03.596736    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:08 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:08.598179    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:13 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:13.599447    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:18 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:18.601077    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:19.688270    1285 scope.go:110] "RemoveContainer" containerID="4e4e4a655756263037cc7f2afeca986a6c3db7a8f78dfeccf567f40db806da08"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:19.688652    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:19 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:19.688959    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:23 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:23.601888    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:28 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:28.602686    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:33.326136    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:33.326577    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:33 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:33.603534    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:38 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:38.604751    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:43 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:43.605867    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:46 embed-certs-20211231102953-6736 kubelet[1285]: I1231 10:42:46.326005    1285 scope.go:110] "RemoveContainer" containerID="03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	Dec 31 10:42:46 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:46.326327    1285 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-2gpsc_kube-system(1c5247f0-9b6e-4b7c-9325-0f80e9697124)\"" pod="kube-system/kindnet-2gpsc" podUID=1c5247f0-9b6e-4b7c-9325-0f80e9697124
	Dec 31 10:42:48 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:48.607407    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:53 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:53.608494    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:42:58 embed-certs-20211231102953-6736 kubelet[1285]: E1231 10:42:58.610565    1285 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
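The kubelet log above shows the kindnet-cni container crash-looping, which leaves the CNI plugin uninitialized and the node NotReady; that is what produces the FailedScheduling events on the busybox pod described below. A sketch of a typical follow-up, not executed in this run (the pod, container, and context names are taken from the log itself):

	kubectl --context embed-certs-20211231102953-6736 -n kube-system logs kindnet-2gpsc -c kindnet-cni --previous
	kubectl --context embed-certs-20211231102953-6736 describe node embed-certs-20211231102953-6736

The first command would print the last log of the crashed kindnet-cni container; the second would show the node conditions and taints behind the scheduling failure.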
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-65b6p storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner: exit status 1 (67.732902ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z6fqg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z6fqg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  52s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-65b6p" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod busybox coredns-64897985d-65b6p storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (485.65s)
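Note on the failure mode: the pod description above lists only the default NoExecute tolerations (node.kubernetes.io/not-ready:NoExecute for 300s), while a node that never becomes Ready also carries a NoSchedule-effect not-ready taint from the node lifecycle controller; that is the taint the scheduler message refers to, and no default toleration covers it. A sketch of how the node's taints could be confirmed, not run in this report (context and node names from the logs above):

	kubectl --context embed-certs-20211231102953-6736 get node embed-certs-20211231102953-6736 -o jsonpath='{.spec.taints}'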

TestStartStop/group/default-k8s-different-port/serial/DeployApp (485.41s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 create -f testdata/busybox.yaml
E1231 10:37:39.269448    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [d3676780-07d4-40ac-b050-b1a20395fabb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E1231 10:38:27.790126    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:38:29.444396    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:38:57.128668    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:38:57.294314    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:181: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
start_stop_delete_test.go:181: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2021-12-31 10:45:39.799142752 +0000 UTC m=+3838.780958711
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe po busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context default-k8s-different-port-20211231103230-6736 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4q6g (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-t4q6g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  45s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 logs busybox -n default
start_stop_delete_test.go:181: (dbg) kubectl --context default-k8s-different-port-20211231103230-6736 logs busybox -n default:
start_stop_delete_test.go:181: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:32:50.932223463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be0a219411bd67bdb3a91065eefcb9498528f3367077de2d90f3a0ebd5f1a6ea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be0a219411bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "3f15aedb2298185e311300c15ed78486951e6e1f525e08afdb042e339fa53d16",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
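Note on the inspect output above: the NetworkSettings.Ports map shows each container port published on 127.0.0.1 with an ephemeral host port, which is how the harness reaches SSH (22/tcp) and the apiserver (8444/tcp, the "different port") of the kic container. The same Go template the tests log later can pull a single mapping out; a minimal sketch against the profile above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-different-port-20211231103230-6736
	# prints 49412, matching the 22/tcp entry in the JSON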
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.275799237s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
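	Each Audit row wraps its Args across several lines; every row reconstructs to a single invocation. For example, the metrics-server rows for old-k8s-version above correspond to (a reconstruction, not a re-run):
	
	  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20211231102602-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain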
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:43:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:43:22.844642  253675 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:43:22.844763  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844769  253675 out.go:310] Setting ErrFile to fd 2...
	I1231 10:43:22.844775  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844954  253675 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:43:22.845319  253675 out.go:304] Setting JSON to false
	I1231 10:43:22.847068  253675 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5157,"bootTime":1640942245,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:43:22.847193  253675 start.go:122] virtualization: kvm guest
	I1231 10:43:22.850701  253675 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:43:22.853129  253675 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:43:22.850948  253675 notify.go:174] Checking for updates...
	I1231 10:43:22.855641  253675 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:43:22.857638  253675 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:22.860223  253675 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:43:22.862455  253675 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:43:22.862933  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:22.863367  253675 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:43:22.907454  253675 docker.go:132] docker version: linux-20.10.12
	I1231 10:43:22.907559  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.010925  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:22.94341606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:43:23.011080  253675 docker.go:237] overlay module found
	I1231 10:43:23.014207  253675 out.go:176] * Using the docker driver based on existing profile
	I1231 10:43:23.014243  253675 start.go:280] selected driver: docker
	I1231 10:43:23.014249  253675 start.go:795] validating driver "docker" against &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.014391  253675 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:43:23.014412  253675 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:43:23.014421  253675 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:43:23.014467  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.014493  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.017136  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.017884  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.116838  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:23.05133577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:43:23.116982  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.117011  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.119638  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.119774  253675 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:43:23.119804  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:23.119812  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:23.119829  253675 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.122399  253675 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:43:23.122462  253675 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:43:23.124490  253675 out.go:176] * Pulling base image ...
	I1231 10:43:23.124541  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:23.124581  253675 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:43:23.124590  253675 cache.go:57] Caching tarball of preloaded images
	I1231 10:43:23.124659  253675 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:43:23.124888  253675 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:43:23.124904  253675 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:43:23.125057  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.163843  253675 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:43:23.163872  253675 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:43:23.163888  253675 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:43:23.163917  253675 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:43:23.164009  253675 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 74.119µs
	I1231 10:43:23.164031  253675 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:43:23.164039  253675 fix.go:55] fixHost starting: 
	I1231 10:43:23.164295  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.199236  253675 fix.go:108] recreateIfNeeded on embed-certs-20211231102953-6736: state=Stopped err=<nil>
	W1231 10:43:23.199269  253675 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:43:20.417650  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:22.917443  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:23.202320  253675 out.go:176] * Restarting existing docker container for "embed-certs-20211231102953-6736" ...
	I1231 10:43:23.202389  253675 cli_runner.go:133] Run: docker start embed-certs-20211231102953-6736
	I1231 10:43:23.625205  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.664982  253675 kic.go:420] container "embed-certs-20211231102953-6736" state is running.
	I1231 10:43:23.665431  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:23.703812  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.704117  253675 machine.go:88] provisioning docker machine ...
	I1231 10:43:23.704144  253675 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:43:23.704223  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:23.742698  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:23.743011  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:23.743039  253675 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:43:23.743711  253675 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58602->127.0.0.1:49427: read: connection reset by peer
	I1231 10:43:26.891264  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:43:26.891349  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:26.930929  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:26.931119  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:26.931150  253675 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:43:27.068707  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: 
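	The /etc/hosts command above is idempotent: an existing 127.0.1.1 entry is rewritten in place with sed, and a new one is appended only when none exists, so repeated provisioning passes leave a single entry. A quick check (a sketch, assuming the container is still up):
	
	  docker exec embed-certs-20211231102953-6736 grep 127.0.1.1 /etc/hosts
	  # expected: 127.0.1.1 embed-certs-20211231102953-6736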
	I1231 10:43:27.068740  253675 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:43:27.068790  253675 ubuntu.go:177] setting up certificates
	I1231 10:43:27.068818  253675 provision.go:83] configureAuth start
	I1231 10:43:27.068869  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:27.106090  253675 provision.go:138] copyHostCerts
	I1231 10:43:27.106158  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:43:27.106172  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:43:27.106233  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:43:27.106338  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:43:27.106358  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:43:27.106382  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:43:27.106444  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:43:27.106453  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:43:27.106472  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:43:27.106526  253675 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
	I1231 10:43:27.255618  253675 provision.go:172] copyRemoteCerts
	I1231 10:43:27.255688  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:43:27.255719  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.293465  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.393419  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:43:27.414701  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:43:27.438482  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:43:27.461503  253675 provision.go:86] duration metric: configureAuth took 392.669293ms
	I1231 10:43:27.461542  253675 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:43:27.461744  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:27.461759  253675 machine.go:91] provisioned docker machine in 3.757626792s
	I1231 10:43:27.461767  253675 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:43:27.461773  253675 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:43:27.461808  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:43:27.461836  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.504760  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.605497  253675 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:43:27.609459  253675 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:43:27.609488  253675 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:43:27.609499  253675 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:43:27.609505  253675 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:43:27.609516  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:43:27.609580  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:43:27.609669  253675 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:43:27.609751  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:43:27.618322  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:27.638480  253675 start.go:270] post-start completed in 176.700691ms
	I1231 10:43:27.638544  253675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:43:27.638578  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.678338  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.773349  253675 fix.go:57] fixHost completed within 4.609301543s
	I1231 10:43:27.773377  253675 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 4.609356997s
	I1231 10:43:27.773448  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:24.917803  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.417182  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.808989  253675 ssh_runner.go:195] Run: systemctl --version
	I1231 10:43:27.809043  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.809080  253675 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:43:27.809149  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.849360  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.849710  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.941036  253675 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:43:27.971837  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:43:27.982942  253675 docker.go:158] disabling docker service ...
	I1231 10:43:27.983000  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:43:27.994466  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:43:28.005201  253675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:43:28.084281  253675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:43:28.165947  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
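	In the sequence above, docker.socket is stopped and disabled before docker.service is masked; disabling the socket matters because systemd socket activation could otherwise restart dockerd on the next connection to /var/run/docker.sock while containerd is being promoted to the sole runtime. The result can be confirmed with (a sketch):
	
	  sudo systemctl is-enabled docker.socket   # "disabled"
	  sudo systemctl is-enabled docker.service  # "masked"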
	I1231 10:43:28.176963  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:43:28.193845  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
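	The printf payload above is containerd's entire config.toml, base64-encoded so it passes through the nested shell quoting intact. Decoding the blob shows the settings this run depends on (a sketch; the blob is the one above):
	
	  printf %s 'dmVyc2lvbiA9IDIK...' | base64 -d | head -4
	  # version = 2
	  # root = "/var/lib/containerd"
	  # state = "/run/containerd"
	  # oom_score = 0
	
	Further down the decoded file, conf_dir = "/etc/cni/net.mk" matches the kubelet.cni-conf-dir extra-config, and SystemdCgroup = false matches the cgroupfs CgroupDriver reported by docker info.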
	I1231 10:43:28.210061  253675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:43:28.218904  253675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:43:28.227395  253675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:43:28.309175  253675 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:43:28.390283  253675 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:43:28.390355  253675 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:43:28.396380  253675 start.go:458] Will wait 60s for crictl version
	I1231 10:43:28.396511  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:28.426104  253675 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:43:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:43:29.418833  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:31.917246  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:33.918423  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:36.418075  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:39.474533  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:39.501276  253675 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:43:39.501336  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.527002  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.551133  253675 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:43:39.551225  253675 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:43:39.587623  253675 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:43:39.591336  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
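	The rewrite above goes through a temp file plus sudo cp rather than a plain redirect or mv: a redirect would be opened by the unprivileged shell before sudo takes effect, and /etc/hosts inside a Docker container is a bind mount, so it must be overwritten in place rather than replaced. A sketch of the same pattern:
	
	  tmp=$(mktemp)
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.58.1\thost.minikube.internal'; } > "$tmp"
	  sudo cp "$tmp" /etc/hosts   # cp, not mv: keeps the bind-mounted inode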
	I1231 10:43:39.604414  253675 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:43:39.606523  253675 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:43:39.608679  253675 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:43:39.608778  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:39.608844  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.634556  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.634585  253675 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:43:39.634630  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.662182  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.662208  253675 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:43:39.662251  253675 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:43:39.687863  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:39.687887  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:39.687902  253675 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:43:39.687916  253675 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:43:39.688044  253675 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:43:39.688123  253675 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1231 10:43:39.688169  253675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:43:39.696210  253675 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:43:39.696312  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:43:39.704267  253675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:43:39.718589  253675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:43:39.734360  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
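
The "scp memory --> ..." lines above copy generated assets (the kubelet drop-in, the kubelet unit, kubeadm.yaml.new) straight from memory to the node without staging them on local disk. A rough sketch of that idea, assuming a plain ssh pipe into sudo tee rather than minikube's actual SSH runner; host and paths here are illustrative:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// scpMemory streams an in-memory asset to a remote path, loosely mirroring
// the "scp memory --> path (N bytes)" log lines.
func scpMemory(host, path string, data []byte) error {
	cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	yaml := []byte("# kubeadm.yaml.new contents would go here\n")
	if err := scpMemory("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new", yaml); err != nil {
		fmt.Println("copy failed:", err)
	}
}
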
	I1231 10:43:39.749132  253675 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:43:39.753026  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.764036  253675 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:43:39.764162  253675 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:43:39.764206  253675 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:43:39.764332  253675 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:43:39.764393  253675 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:43:39.764430  253675 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:43:39.764535  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:43:39.764569  253675 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:43:39.764576  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:43:39.764600  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:43:39.764619  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:43:39.764640  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:43:39.764679  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:39.765624  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:43:39.786589  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:43:39.806214  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:43:39.827437  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:43:39.847223  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:43:39.869717  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:43:39.892296  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:43:39.915269  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:43:39.940596  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:43:39.965015  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:43:39.987472  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:43:40.008065  253675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:43:40.023700  253675 ssh_runner.go:195] Run: openssl version
	I1231 10:43:40.029648  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:43:40.038817  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.042994  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.043064  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.049114  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:43:40.057598  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:43:40.067157  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071141  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071208  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.077176  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:43:40.085041  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:43:40.093428  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097387  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097447  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.102969  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
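
The openssl x509 -hash / ln -fs sequence above installs each CA into OpenSSL's hashed-lookup directory: tools resolve trust anchors in /etc/ssl/certs by subject hash (for example b5213941.0), so each installed PEM gets a "<hash>.0" symlink. A sketch of the same steps in Go; installCert is a hypothetical helper that shells out to openssl, as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCert links a CA certificate into /etc/ssl/certs under its
// OpenSSL subject hash so that hash-based CA lookup can find it.
func installCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link, then re-create it.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
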
	I1231 10:43:40.110890  253675 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_read
y:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:40.110993  253675 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:43:40.111061  253675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:43:40.137789  253675 cri.go:87] found id: "c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239"
	I1231 10:43:40.137839  253675 cri.go:87] found id: "03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	I1231 10:43:40.137847  253675 cri.go:87] found id: "a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d"
	I1231 10:43:40.137854  253675 cri.go:87] found id: "7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f"
	I1231 10:43:40.137861  253675 cri.go:87] found id: "bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3"
	I1231 10:43:40.137868  253675 cri.go:87] found id: "4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3"
	I1231 10:43:40.137875  253675 cri.go:87] found id: "eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072"
	I1231 10:43:40.137883  253675 cri.go:87] found id: ""
	I1231 10:43:40.137935  253675 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:43:40.152791  253675 cri.go:114] JSON = null
	W1231 10:43:40.152840  253675 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I1231 10:43:40.152904  253675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:43:40.161320  253675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:43:40.168524  253675 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.169402  253675 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211231102953-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:40.169777  253675 kubeconfig.go:127] "embed-certs-20211231102953-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:43:40.170362  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:43:40.172686  253675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:43:40.180820  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.180878  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.195345  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.395812  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.395894  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.412496  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.595573  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.595660  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.610754  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.795993  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.796074  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.812031  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.996111  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.996181  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.011422  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.195687  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.195777  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.211094  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.396304  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.396402  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.413206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.595459  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.595552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.611792  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.796061  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.796162  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.811694  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.995913  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.995991  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.013353  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.195522  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.195645  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.212206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.396496  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.396584  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.414476  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.595660  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.595748  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.611643  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:38.917717  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:41.417106  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:42.796314  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.797268  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.814486  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.995563  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.995659  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.011246  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.195463  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.195552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.211623  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.211657  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.211698  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.227133  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:43:43.227160  253675 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
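
Before concluding "needs reconfigure", the runner polls roughly every 200 ms for a running kube-apiserver with pgrep; when no PID appears before the deadline, it falls back to kubeadm reset and a fresh kubeadm init, as the next lines show. A compact sketch of that poll, with waitForAPIServerPID as a hypothetical name for illustration:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep for a kube-apiserver process until one
// appears or the deadline passes, mirroring the loop in the log above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println(err) // triggers the "needs reconfigure" path seen above
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
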
	I1231 10:43:43.227194  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:43:43.962869  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:43:43.975076  253675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:43:43.984301  253675 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:43:43.984358  253675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:43:43.992612  253675 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:43:43.992653  253675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:43:44.298681  253675 out.go:203]   - Generating certificates and keys ...
	I1231 10:43:45.481520  253675 out.go:203]   - Booting up control plane ...
	I1231 10:43:43.418275  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:45.419818  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:47.917950  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:50.417701  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:52.418425  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:54.917711  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:57.416994  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:58.029054  253675 out.go:203]   - Configuring RBAC rules ...
	I1231 10:43:58.483326  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:58.483357  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:58.488726  253675 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:43:58.488827  253675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:43:58.493521  253675 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:43:58.493558  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:43:58.512259  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:43:59.192508  253675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:43:59.192665  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.192694  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.213675  253675 ops.go:34] apiserver oom_adj: -16
	I1231 10:43:59.322906  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.895946  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.395692  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.895655  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.395632  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.896412  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:02.395407  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.418293  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:01.918364  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:02.896452  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.396392  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.895366  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.396336  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.895859  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.395565  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.895587  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.395343  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.895271  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:07.395321  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.922878  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:06.417497  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:07.896038  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.396421  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.895347  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.395521  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.895830  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.395802  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.501742  253675 kubeadm.go:864] duration metric: took 11.30912297s to wait for elevateKubeSystemPrivileges.
	I1231 10:44:10.501806  253675 kubeadm.go:390] StartCluster complete in 30.390935465s
	I1231 10:44:10.501833  253675 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:10.501996  253675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:44:10.504119  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:11.026995  253675 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:44:11.027076  253675 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:44:11.030066  253675 out.go:176] * Verifying Kubernetes components...
	I1231 10:44:11.027250  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:44:11.027529  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:44:11.027546  253675 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:44:11.030253  253675 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030274  253675 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.030298  253675 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:44:11.030347  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030509  253675 addons.go:65] Setting dashboard=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030533  253675 addons.go:153] Setting addon dashboard=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.030537  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1231 10:44:11.030543  253675 addons.go:165] addon dashboard should already be in state true
	I1231 10:44:11.030572  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030638  253675 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030651  253675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:44:11.030970  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031137  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031253  253675 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.031280  253675 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.031288  253675 addons.go:165] addon metrics-server should already be in state true
	I1231 10:44:11.031157  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031313  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.031695  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.094017  253675 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:44:11.101006  253675 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.100849  253675 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:44:11.102159  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:44:11.112212  253675 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:44:11.109078  253675 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.109381  253675 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.109540  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	W1231 10:44:11.112312  253675 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:44:11.112367  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.112422  253675 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.112434  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:44:11.112456  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:44:11.112471  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:44:11.112489  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112497  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112389  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112946  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.168999  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169017  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169333  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.171810  253675 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.171837  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:44:11.171897  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.220524  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:44:11.225393  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.379784  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:44:11.379826  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:44:11.383351  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.479751  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:44:11.479789  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:44:11.481212  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:44:11.481233  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:44:11.581046  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:44:11.581120  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:44:11.582256  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:44:11.582344  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:44:11.587594  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.679970  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.680004  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:44:11.682163  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:44:11.682192  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:44:11.791172  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:44:11.791211  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:44:11.791675  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.895732  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:44:11.895775  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:44:11.995782  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:44:11.995814  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:44:12.085513  253675 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
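
The CoreDNS edit that produced this "host record injected" line is the sed pipeline a few entries up: it splices a hosts{} block in front of the "forward . /etc/resolv.conf" directive so pods can resolve host.minikube.internal to the network gateway. The same transformation sketched in Go over a trimmed example Corefile (not the one from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	hostsBlock := `        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		// Insert the hosts{} block just before the forward directive,
		// as the sed expression in the log does.
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}
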
	I1231 10:44:12.099705  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:44:12.099792  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:44:12.194606  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.194725  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:44:12.297993  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.500547  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117122608s)
	I1231 10:44:08.419100  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:10.918011  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.919429  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.995685  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203961788s)
	I1231 10:44:12.995735  253675 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20211231102953-6736"
	I1231 10:44:13.179949  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:14.102814  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.804765707s)
	I1231 10:44:14.106088  253675 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:44:14.106137  253675 addons.go:417] enableAddons completed in 3.078602112s
	I1231 10:44:15.629711  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:17.630121  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:15.417126  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.917105  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.920187  248388 node_ready.go:38] duration metric: took 4m0.011176212s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:44:17.924079  248388 out.go:176] 
	W1231 10:44:17.924288  248388 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:44:17.924318  248388 out.go:241] * 
	W1231 10:44:17.925165  248388 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:44:20.129390  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:22.129653  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:24.629739  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:27.129548  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:29.129623  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:31.130167  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:33.629367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:35.630512  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:38.129072  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:40.129862  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:42.628942  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:44.630077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:47.129721  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:49.629888  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:52.129324  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:54.129718  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:56.629788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:59.129021  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:01.129651  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:03.629842  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:05.629877  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:08.128850  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:10.129558  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:12.629587  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:14.629796  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:17.129902  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:19.629779  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:22.129932  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:24.630806  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:27.130380  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:29.629652  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:31.629743  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:34.129422  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:36.629867  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2831ff3abf5d3       6de166512aa22       5 minutes ago       Exited              kindnet-cni               6                   2de2afafee004
	bd73d75d2e911       b46c42588d511       12 minutes ago      Running             kube-proxy                0                   92acbaf1e0f9d
	6f1fab877ff5d       b6d7abedde399       12 minutes ago      Running             kube-apiserver            0                   4bf0c162f76ea
	631b3be24dd2b       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   8150f672d9df2
	a578cbf12a8e4       f51846a4fd288       12 minutes ago      Running             kube-controller-manager   0                   8046160d707ed
	78f1ab230e901       71d575efe6283       12 minutes ago      Running             kube-scheduler            0                   7a99114bf9d50
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:45:41 UTC. --
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.232114334Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.955849355Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.962860726Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\" returns successfully"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.625457193Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.653652653Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.654422181Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.903113456Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\" returns successfully"
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196090794Z" level=info msg="Finish piping stdout of container \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196091034Z" level=info msg="Finish piping stderr of container \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196929815Z" level=info msg="TaskExit event &TaskExit{ContainerID:82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523,ID:82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523,Pid:2291,ExitStatus:2,ExitedAt:2021-12-31 10:37:41.196610758 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.234636261Z" level=info msg="shim disconnected" id=82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.234754150Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:37:42 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:42.146475323Z" level=info msg="RemoveContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:37:42 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:42.153143179Z" level=info msg="RemoveContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\" returns successfully"
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.625286175Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.653191231Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.653987597Z" level=info msg="StartContainer for \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.888030955Z" level=info msg="StartContainer for \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\" returns successfully"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.186125929Z" level=info msg="Finish piping stderr of container \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.186157192Z" level=info msg="Finish piping stdout of container \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.187028085Z" level=info msg="TaskExit event &TaskExit{ContainerID:2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29,ID:2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29,Pid:2611,ExitStatus:2,ExitedAt:2021-12-31 10:40:41.186724899 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.215211769Z" level=info msg="shim disconnected" id=2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.215300009Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.497569419Z" level=info msg="RemoveContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.503450657Z" level=info msg="RemoveContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\" returns successfully"
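	containerd's view of the failure: kindnet-cni attempt 6 starts, exits with status 2 about ten seconds later, the shim is torn down, and the previous attempt's container is garbage-collected. To read the crashing container's own output rather than containerd's bookkeeping, one option is crictl from inside the node; this is a sketch that assumes crictl is configured on the minikube node, as it normally is:
	    minikube ssh -p default-k8s-different-port-20211231103230-6736 "sudo crictl ps -a --name kindnet-cni"
	    minikube ssh -p default-k8s-different-port-20211231103230-6736 "sudo crictl logs 2831ff3abf5d3"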
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_33_24_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:33:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-rgq8t                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z25nr                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)     100m (1%)
	  memory             150Mi (0%)    50Mi (0%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
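	The Ready=False condition above comes paired with the node.kubernetes.io/not-ready:NoSchedule taint shown in the node spec, which is what later leaves the busybox test pod Pending. The taint can be read directly; an illustrative command, with the same caveat that the cluster must still be running:
	    kubectl --context default-k8s-different-port-20211231103230-6736 \
	      get node default-k8s-different-port-20211231103230-6736 -o jsonpath='{.spec.taints}'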
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
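	The "martian source" entries are the kernel flagging packets whose source address looks wrong for the interface they arrived on; with pod CIDRs and container veth pairs this is common noise rather than the failure itself. Whether the host is configured to log these at all can be confirmed with a sysctl query (an illustrative check, not part of the captured run):
	    sysctl net.ipv4.conf.all.log_martians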
	
	* 
	* ==> etcd [631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549] <==
	* {"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.140Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.179Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-12-31T10:43:17.779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":588}
	{"level":"info","ts":"2021-12-31T10:43:17.780Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":588,"took":"595.668µs"}
	
	* 
	* ==> kernel <==
	*  10:45:41 up  1:28,  0 users,  load average: 1.03, 1.08, 1.72
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e] <==
	* I1231 10:33:20.079343       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:33:20.080322       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:33:20.080422       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:33:20.080482       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1231 10:33:20.080553       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:33:20.080593       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:33:20.920577       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:33:20.920611       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:33:20.941744       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:33:20.945455       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:33:20.945476       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:33:21.601484       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:33:21.639873       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:33:21.720429       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:33:21.727635       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1231 10:33:21.728837       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:33:21.735242       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:33:22.098268       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:33:23.373002       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:33:23.385196       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:33:23.401377       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:33:28.493494       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:33:35.559469       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:33:35.607401       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:33:37.326422       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f] <==
	* I1231 10:33:34.946962       1 shared_informer.go:247] Caches are synced for GC 
	I1231 10:33:34.947085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I1231 10:33:34.946811       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:33:34.947601       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I1231 10:33:34.947738       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1231 10:33:34.947910       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I1231 10:33:34.950660       1 shared_informer.go:247] Caches are synced for job 
	I1231 10:33:34.954003       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1231 10:33:34.956287       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1231 10:33:34.956482       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1231 10:33:35.121007       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:33:35.121046       1 disruption.go:371] Sending events to api server.
	I1231 10:33:35.157911       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.163115       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.205005       1 shared_informer.go:247] Caches are synced for stateful set 
	I1231 10:33:35.566040       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:33:35.591164       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597663       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597700       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:33:35.621312       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z25nr"
	I1231 10:33:35.624088       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rgq8t"
	I1231 10:33:35.860532       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-5t6ck"
	I1231 10:33:35.890918       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-hkl6w"
	I1231 10:33:35.942712       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:33:35.986134       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-5t6ck"
	
	* 
	* ==> kube-proxy [bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869] <==
	* I1231 10:33:37.213196       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:33:37.213313       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:33:37.213369       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:33:37.321972       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:33:37.322042       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:33:37.322055       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:33:37.322072       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:33:37.322557       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:33:37.323369       1 config.go:317] "Starting service config controller"
	I1231 10:33:37.323386       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:33:37.323454       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:33:37.323461       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:33:37.424281       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1231 10:33:37.424342       1 shared_informer.go:247] Caches are synced for service config 
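	kube-proxy comes up normally and falls back to the iptables proxier because no mode was set ("Unknown proxy mode, assuming iptables proxy"). In a kubeadm-style cluster such as this one, the effective mode usually lives in the kube-proxy ConfigMap; this is an assumption about the cluster's layout rather than something the log confirms:
	    kubectl --context default-k8s-different-port-20211231103230-6736 \
	      -n kube-system get configmap kube-proxy -o yaml | grep mode: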
	
	* 
	* ==> kube-scheduler [78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb] <==
	* W1231 10:33:20.086374       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:20.086415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:20.086485       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:20.086591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:20.087225       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:20.087307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:33:20.933451       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:33:20.933497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:33:20.977195       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:33:20.977245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:33:21.045273       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:33:21.045313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:33:21.102118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:21.102162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:21.139854       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:21.139888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:21.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:33:21.193755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:33:21.315680       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:33:21.315727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1231 10:33:21.315890       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:33:21.315957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:33:21.399055       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:21.399096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1231 10:33:21.711242       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
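	The scheduler's "forbidden" warnings are the usual startup race: its informers begin watching before the RBAC bootstrap finishes, and the final line shows the caches syncing once permissions land, so they are transient here. If they persisted, the bootstrap binding could be inspected with an illustrative query:
	    kubectl get clusterrolebinding system:kube-scheduler -o wide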
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:45:41 UTC. --
	Dec 31 10:44:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:33.623007    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:44:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:33.889754    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:38 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:38.890510    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:43 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:43.891660    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:44:48.622983    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:48.623487    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:48.893104    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:53 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:53.894453    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:58 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:58.895595    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:01 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:01.623342    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:01 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:01.623696    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:03 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:03.897107    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:08 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:08.898745    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:13 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:13.899729    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:14 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:14.623024    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:14 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:14.623462    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:18 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:18.900961    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:23 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:23.902829    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:26 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:26.622399    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:26 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:26.622837    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:28 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:28.903591    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:33.905172    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:37.622799    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:37.623097    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:38 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:38.906480    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
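	The kubelet is stuck in one loop for the whole window: kindnet-cni sits in CrashLoopBackOff, so no CNI configuration is ever written and the runtime keeps reporting "cni plugin not initialized". A quick way to confirm the missing configuration from inside the node, assuming the default location where the kubelet expects CNI config files:
	    minikube ssh -p default-k8s-different-port-20211231103230-6736 "ls -l /etc/cni/net.d"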
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner: exit status 1 (66.147018ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4q6g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t4q6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  48s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hkl6w" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner: exit status 1
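Only the busybox pod still exists by the time describe runs; the coredns and storage-provisioner pods named by the earlier field-selector query are gone, hence the NotFound errors on stderr. The scheduling failure itself (0/1 nodes available, not-ready taint not tolerated) can be re-queried from events; an illustrative command assuming the cluster is still reachable:
    kubectl --context default-k8s-different-port-20211231103230-6736 \
      -n default get events --field-selector involvedObject.name=busybox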
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:32:50.932223463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be0a219411bd67bdb3a91065eefcb9498528f3367077de2d90f3a0ebd5f1a6ea",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/be0a219411bd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "3f15aedb2298185e311300c15ed78486951e6e1f525e08afdb042e339fa53d16",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
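The "Ports" map in the inspect output above is how the host-side harness reaches services inside the kicbase container: each container port is published on a 127.0.0.1 ephemeral port. The Go template minikube applies later in this log, (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort, extracts the SSH port. A minimal sketch of the same lookup, assuming only the docker CLI on PATH and the container name from above:

    // Extract the host port mapped to the container's 22/tcp using the same
    // Go template that minikube's cli_runner invokes later in this log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("default-k8s-different-port-20211231103230-6736")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(port) // "49412" per the Ports map above
    }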
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.076507435s)
E1231 10:45:43.945430    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
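The Audit table above is minikube's recorded command history. Assuming it is persisted as one JSON object per line under MINIKUBE_HOME/logs/audit.json (the path and format are an assumption here, not shown in this report), the raw entries can be dumped without committing to particular field names:

    // Sketch: print each audit entry generically. Assumes MINIKUBE_HOME is set
    // (as it is in this job's environment) and that logs/audit.json holds one
    // JSON object per line; both are assumptions, so decode into a plain map.
    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	f, err := os.Open(filepath.Join(os.Getenv("MINIKUBE_HOME"), "logs", "audit.json"))
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		var entry map[string]interface{}
    		if err := json.Unmarshal(sc.Bytes(), &entry); err != nil {
    			continue // skip malformed lines rather than guessing a schema
    		}
    		fmt.Println(entry)
    	}
    }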
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:43:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
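The header line above fully specifies the log format, so the "Last Start" entries that follow can be split mechanically into level, date, timestamp, thread id, source location, and message. A small sketch against the first entry below:

    // Parse the klog-style header documented above:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1231 10:43:22.844642  253675 out.go:297] Setting OutFile to fd 1 ..."
    	m := klogRe.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("level=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }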
	I1231 10:43:22.844642  253675 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:43:22.844763  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844769  253675 out.go:310] Setting ErrFile to fd 2...
	I1231 10:43:22.844775  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844954  253675 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:43:22.845319  253675 out.go:304] Setting JSON to false
	I1231 10:43:22.847068  253675 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5157,"bootTime":1640942245,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:43:22.847193  253675 start.go:122] virtualization: kvm guest
	I1231 10:43:22.850701  253675 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:43:22.853129  253675 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:43:22.850948  253675 notify.go:174] Checking for updates...
	I1231 10:43:22.855641  253675 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:43:22.857638  253675 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:22.860223  253675 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:43:22.862455  253675 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:43:22.862933  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:22.863367  253675 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:43:22.907454  253675 docker.go:132] docker version: linux-20.10.12
	I1231 10:43:22.907559  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.010925  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:22.94341606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:43:23.011080  253675 docker.go:237] overlay module found
	I1231 10:43:23.014207  253675 out.go:176] * Using the docker driver based on existing profile
	I1231 10:43:23.014243  253675 start.go:280] selected driver: docker
	I1231 10:43:23.014249  253675 start.go:795] validating driver "docker" against &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true ku
belet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.014391  253675 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:43:23.014412  253675 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:43:23.014421  253675 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:43:23.014467  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.014493  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.017136  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.017884  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.116838  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:23.05133577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Client
Info:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:43:23.116982  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.117011  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.119638  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.119774  253675 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:43:23.119804  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:23.119812  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:23.119829  253675 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop
:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.122399  253675 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:43:23.122462  253675 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:43:23.124490  253675 out.go:176] * Pulling base image ...
	I1231 10:43:23.124541  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:23.124581  253675 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:43:23.124590  253675 cache.go:57] Caching tarball of preloaded images
	I1231 10:43:23.124659  253675 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:43:23.124888  253675 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:43:23.124904  253675 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:43:23.125057  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.163843  253675 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:43:23.163872  253675 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:43:23.163888  253675 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:43:23.163917  253675 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:43:23.164009  253675 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 74.119µs
	I1231 10:43:23.164031  253675 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:43:23.164039  253675 fix.go:55] fixHost starting: 
	I1231 10:43:23.164295  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.199236  253675 fix.go:108] recreateIfNeeded on embed-certs-20211231102953-6736: state=Stopped err=<nil>
	W1231 10:43:23.199269  253675 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:43:20.417650  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:22.917443  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:23.202320  253675 out.go:176] * Restarting existing docker container for "embed-certs-20211231102953-6736" ...
	I1231 10:43:23.202389  253675 cli_runner.go:133] Run: docker start embed-certs-20211231102953-6736
	I1231 10:43:23.625205  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.664982  253675 kic.go:420] container "embed-certs-20211231102953-6736" state is running.
	I1231 10:43:23.665431  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:23.703812  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.704117  253675 machine.go:88] provisioning docker machine ...
	I1231 10:43:23.704144  253675 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:43:23.704223  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:23.742698  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:23.743011  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:23.743039  253675 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:43:23.743711  253675 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58602->127.0.0.1:49427: read: connection reset by peer
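This dial failure is the expected race after docker start: the forwarded port exists before sshd inside the container is listening, and the successful hostname command a few seconds later shows the connection was simply retried. An illustrative retry loop (a sketch of the pattern, not minikube's actual code):

    // Retry a TCP dial to the forwarded SSH port until sshd inside the
    // restarted container accepts connections.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, wait)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		time.Sleep(wait)
    	}
    	return nil, fmt.Errorf("dial %s: %w", addr, lastErr)
    }

    func main() {
    	conn, err := dialWithRetry("127.0.0.1:49427", 10, time.Second)
    	if err != nil {
    		panic(err)
    	}
    	conn.Close()
    	fmt.Println("ssh port reachable")
    }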
	I1231 10:43:26.891264  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:43:26.891349  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:26.930929  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:26.931119  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:26.931150  253675 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:43:27.068707  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:43:27.068740  253675 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:43:27.068790  253675 ubuntu.go:177] setting up certificates
	I1231 10:43:27.068818  253675 provision.go:83] configureAuth start
	I1231 10:43:27.068869  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:27.106090  253675 provision.go:138] copyHostCerts
	I1231 10:43:27.106158  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:43:27.106172  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:43:27.106233  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:43:27.106338  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:43:27.106358  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:43:27.106382  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:43:27.106444  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:43:27.106453  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:43:27.106472  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:43:27.106526  253675 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
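The server certificate minted here carries the SAN list from the log line above (IPs 192.168.58.2 and 127.0.0.1, DNS names localhost, minikube, and the profile hostname). An illustrative sketch of a certificate with the same shape using Go's standard library (not minikube's implementation):

    // Mint a CA, then a CA-signed server cert with the logged SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20211231102953-6736"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "embed-certs-20211231102953-6736"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert: %d DER bytes\n", len(srvDER))
    }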
	I1231 10:43:27.255618  253675 provision.go:172] copyRemoteCerts
	I1231 10:43:27.255688  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:43:27.255719  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.293465  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.393419  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:43:27.414701  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:43:27.438482  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:43:27.461503  253675 provision.go:86] duration metric: configureAuth took 392.669293ms
	I1231 10:43:27.461542  253675 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:43:27.461744  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:27.461759  253675 machine.go:91] provisioned docker machine in 3.757626792s
	I1231 10:43:27.461767  253675 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:43:27.461773  253675 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:43:27.461808  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:43:27.461836  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.504760  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.605497  253675 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:43:27.609459  253675 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:43:27.609488  253675 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:43:27.609499  253675 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:43:27.609505  253675 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:43:27.609516  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:43:27.609580  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:43:27.609669  253675 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:43:27.609751  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:43:27.618322  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:27.638480  253675 start.go:270] post-start completed in 176.700691ms
	I1231 10:43:27.638544  253675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:43:27.638578  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.678338  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.773349  253675 fix.go:57] fixHost completed within 4.609301543s
	I1231 10:43:27.773377  253675 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 4.609356997s
	I1231 10:43:27.773448  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:24.917803  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.417182  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.808989  253675 ssh_runner.go:195] Run: systemctl --version
	I1231 10:43:27.809043  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.809080  253675 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:43:27.809149  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.849360  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.849710  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.941036  253675 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:43:27.971837  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:43:27.982942  253675 docker.go:158] disabling docker service ...
	I1231 10:43:27.983000  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:43:27.994466  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:43:28.005201  253675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:43:28.084281  253675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:43:28.165947  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:43:28.176963  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:43:28.193845  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
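
The base64 payload above is how the containerd configuration reaches the node without fighting shell quoting: it decodes to /etc/containerd/config.toml, beginning with `version = 2` and `root = "/var/lib/containerd"`, and visible fragments include `SystemdCgroup = false` and `conf_dir = "/etc/cni/net.mk"`, which match the kubelet flags later in this log. A minimal Go sketch of the decode step, using only the first lines of the payload (the truncation is illustrative; the full string is in the log above):

    package main

    import (
        "encoding/base64"
        "fmt"
    )

    func main() {
        // First lines of the payload from the log; the real string continues
        // for the rest of the config file.
        payload := "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo="
        b, err := base64.StdEncoding.DecodeString(payload)
        if err != nil {
            fmt.Println("decode:", err)
            return
        }
        fmt.Print(string(b)) // prints: version = 2, then root = "/var/lib/containerd"
    }
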
	I1231 10:43:28.210061  253675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:43:28.218904  253675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:43:28.227395  253675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:43:28.309175  253675 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:43:28.390283  253675 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:43:28.390355  253675 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:43:28.396380  253675 start.go:458] Will wait 60s for crictl version
	I1231 10:43:28.396511  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:28.426104  253675 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:43:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:43:29.418833  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:31.917246  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:33.918423  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:36.418075  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:39.474533  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:39.501276  253675 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:43:39.501336  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.527002  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.551133  253675 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:43:39.551225  253675 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:43:39.587623  253675 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:43:39.591336  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.604414  253675 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:43:39.606523  253675 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:43:39.608679  253675 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:43:39.608778  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:39.608844  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.634556  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.634585  253675 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:43:39.634630  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.662182  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.662208  253675 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:43:39.662251  253675 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:43:39.687863  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:39.687887  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:39.687902  253675 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:43:39.687916  253675 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:43:39.688044  253675 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:43:39.688123  253675 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
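
The doubled `ExecStart=` in the drop-in above is systemd's reset idiom: an empty assignment clears the base unit's command so the drop-in can supply its own. The extra kubelet flags (`--global-housekeeping-interval`, `--housekeeping-interval`, `--cni-conf-dir`) come from the ExtraOptions recorded in the config line and appear merged into the flag list in sorted order. A sketch of rendering such a drop-in with text/template (the types and template are illustrative, not minikube's implementation):

    package main

    import (
        "os"
        "sort"
        "text/template"
    )

    type extraOption struct{ Key, Value string }

    // The empty ExecStart= clears the base unit's command so this drop-in
    // can define its own.
    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet{{range .Opts}} --{{.Key}}={{.Value}}{{end}}
    `

    func main() {
        opts := []extraOption{
            {"global-housekeeping-interval", "60m"},
            {"housekeeping-interval", "5m"},
            {"cni-conf-dir", "/etc/cni/net.mk"},
        }
        // The rendered ExecStart in the log lists flags alphabetically.
        sort.Slice(opts, func(i, j int) bool { return opts[i].Key < opts[j].Key })
        t := template.Must(template.New("kubelet").Parse(dropIn))
        _ = t.Execute(os.Stdout, struct {
            Version string
            Opts    []extraOption
        }{"v1.23.1", opts})
    }
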
	I1231 10:43:39.688169  253675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:43:39.696210  253675 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:43:39.696312  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:43:39.704267  253675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:43:39.718589  253675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:43:39.734360  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1231 10:43:39.749132  253675 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:43:39.753026  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.764036  253675 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:43:39.764162  253675 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:43:39.764206  253675 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:43:39.764332  253675 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:43:39.764393  253675 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:43:39.764430  253675 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:43:39.764535  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:43:39.764569  253675 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:43:39.764576  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:43:39.764600  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:43:39.764619  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:43:39.764640  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:43:39.764679  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:39.765624  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:43:39.786589  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:43:39.806214  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:43:39.827437  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:43:39.847223  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:43:39.869717  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:43:39.892296  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:43:39.915269  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:43:39.940596  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:43:39.965015  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:43:39.987472  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:43:40.008065  253675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:43:40.023700  253675 ssh_runner.go:195] Run: openssl version
	I1231 10:43:40.029648  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:43:40.038817  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.042994  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.043064  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.049114  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:43:40.057598  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:43:40.067157  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071141  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071208  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.077176  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:43:40.085041  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:43:40.093428  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097387  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097447  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.102969  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
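
The three `test -s … && ln -fs …` blocks above implement OpenSSL's c_rehash convention: each CA certificate in /usr/share/ca-certificates is linked into /etc/ssl/certs under the name `<subject-hash>.0`, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem above). A sketch of the same dance in Go (paths taken from the log; error handling abbreviated):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("openssl:", err)
            return
        }
        // The symlink name is <subject-hash>.0, e.g. b5213941.0 above.
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(cert, link); err != nil {
            fmt.Println("symlink:", err)
            return
        }
        fmt.Println("linked", link, "->", cert)
    }
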
	I1231 10:43:40.110890  253675 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_read
y:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:40.110993  253675 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:43:40.111061  253675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:43:40.137789  253675 cri.go:87] found id: "c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239"
	I1231 10:43:40.137839  253675 cri.go:87] found id: "03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	I1231 10:43:40.137847  253675 cri.go:87] found id: "a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d"
	I1231 10:43:40.137854  253675 cri.go:87] found id: "7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f"
	I1231 10:43:40.137861  253675 cri.go:87] found id: "bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3"
	I1231 10:43:40.137868  253675 cri.go:87] found id: "4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3"
	I1231 10:43:40.137875  253675 cri.go:87] found id: "eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072"
	I1231 10:43:40.137883  253675 cri.go:87] found id: ""
	I1231 10:43:40.137935  253675 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:43:40.152791  253675 cri.go:114] JSON = null
	W1231 10:43:40.152840  253675 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 7
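
The warning above comes from a cross-check between two views of the node: `crictl ps` found seven kube-system containers, but `runc list` under the k8s.io root returned JSON `null`, i.e. zero entries, so the unpause pass was skipped. A sketch of that consistency check (the field names are assumptions for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl:", err)
            return
        }
        ids := strings.Fields(string(psOut)) // 7 container IDs in the log above

        runcOut, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            fmt.Println("runc:", err)
            return
        }
        var listed []struct {
            ID string `json:"id"`
        }
        // "JSON = null" in the log unmarshals to a nil slice, i.e. 0 containers.
        _ = json.Unmarshal(runcOut, &listed)

        if len(listed) != len(ids) {
            fmt.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d\n",
                len(listed), len(ids))
        }
    }
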
	I1231 10:43:40.152904  253675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:43:40.161320  253675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:43:40.168524  253675 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.169402  253675 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211231102953-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:40.169777  253675 kubeconfig.go:127] "embed-certs-20211231102953-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:43:40.170362  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:43:40.172686  253675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:43:40.180820  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.180878  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.195345  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.395812  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.395894  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.412496  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.595573  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.595660  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.610754  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.795993  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.796074  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.812031  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.996111  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.996181  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.011422  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.195687  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.195777  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.211094  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.396304  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.396402  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.413206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.595459  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.595552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.611792  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.796061  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.796162  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.811694  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.995913  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.995991  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.013353  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.195522  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.195645  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.212206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.396496  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.396584  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.414476  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.595660  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.595748  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.611643  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:38.917717  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:41.417106  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:42.796314  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.797268  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.814486  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.995563  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.995659  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.011246  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.195463  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.195552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.211623  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.211657  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.211698  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.227133  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:43:43.227160  253675 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
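
The repeated "Checking apiserver status" blocks above are a fixed-interval probe: roughly every 200ms, `pgrep` looks for a running kube-apiserver, and when the window closes with no hit the cluster is declared in need of reconfiguration, which triggers the `kubeadm reset` below. A sketch of the loop (interval and deadline are read off the log timestamps, not taken from minikube source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the command in the log; exit status 1 from
    // pgrep means no matching process was found.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(3 * time.Second)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
        fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
    }
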
	I1231 10:43:43.227194  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:43:43.962869  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:43:43.975076  253675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:43:43.984301  253675 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:43:43.984358  253675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:43:43.992612  253675 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:43:43.992653  253675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:43:44.298681  253675 out.go:203]   - Generating certificates and keys ...
	I1231 10:43:45.481520  253675 out.go:203]   - Booting up control plane ...
	I1231 10:43:43.418275  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:45.419818  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:47.917950  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:50.417701  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:52.418425  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:54.917711  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:57.416994  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:58.029054  253675 out.go:203]   - Configuring RBAC rules ...
	I1231 10:43:58.483326  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:58.483357  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:58.488726  253675 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:43:58.488827  253675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:43:58.493521  253675 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:43:58.493558  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:43:58.512259  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:43:59.192508  253675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:43:59.192665  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.192694  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.213675  253675 ops.go:34] apiserver oom_adj: -16
	I1231 10:43:59.322906  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.895946  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.395692  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.895655  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.395632  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.896412  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:02.395407  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.418293  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:01.918364  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:02.896452  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.396392  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.895366  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.396336  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.895859  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.395565  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.895587  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.395343  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.895271  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:07.395321  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.922878  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:06.417497  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:07.896038  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.396421  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.895347  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.395521  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.895830  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.395802  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.501742  253675 kubeadm.go:864] duration metric: took 11.30912297s to wait for elevateKubeSystemPrivileges.
	I1231 10:44:10.501806  253675 kubeadm.go:390] StartCluster complete in 30.390935465s
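
The burst of `kubectl get sa default` calls above is a readiness gate: a fresh kubeadm cluster cannot run pods until the controller manager creates the "default" ServiceAccount, so the start path polls for it twice a second (11.3s in this run). A sketch of the wait (the binary path comes from the log; the loop shape is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.23.1/kubectl"
        start := time.Now()
        for {
            // Succeeds only once the controller manager has created the
            // "default" ServiceAccount.
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Printf("default service account ready after %s\n", time.Since(start))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
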
	I1231 10:44:10.501833  253675 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:10.501996  253675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:44:10.504119  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:11.026995  253675 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:44:11.027076  253675 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:44:11.030066  253675 out.go:176] * Verifying Kubernetes components...
	I1231 10:44:11.027250  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:44:11.027529  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:44:11.027546  253675 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:44:11.030253  253675 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030274  253675 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.030298  253675 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:44:11.030347  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030509  253675 addons.go:65] Setting dashboard=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030533  253675 addons.go:153] Setting addon dashboard=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.030537  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1231 10:44:11.030543  253675 addons.go:165] addon dashboard should already be in state true
	I1231 10:44:11.030572  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030638  253675 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030651  253675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:44:11.030970  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031137  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031253  253675 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.031280  253675 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.031288  253675 addons.go:165] addon metrics-server should already be in state true
	I1231 10:44:11.031157  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031313  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.031695  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.094017  253675 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:44:11.101006  253675 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.100849  253675 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:44:11.102159  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:44:11.112212  253675 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:44:11.109078  253675 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.109381  253675 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.109540  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	W1231 10:44:11.112312  253675 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:44:11.112367  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.112422  253675 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.112434  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:44:11.112456  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:44:11.112471  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:44:11.112489  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112497  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112389  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112946  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.168999  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169017  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169333  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.171810  253675 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.171837  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:44:11.171897  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.220524  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:44:11.225393  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.379784  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:44:11.379826  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:44:11.383351  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.479751  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:44:11.479789  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:44:11.481212  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:44:11.481233  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:44:11.581046  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:44:11.581120  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:44:11.582256  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:44:11.582344  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:44:11.587594  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.679970  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.680004  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:44:11.682163  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:44:11.682192  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:44:11.791172  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:44:11.791211  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:44:11.791675  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.895732  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:44:11.895775  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:44:11.995782  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:44:11.995814  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:44:12.085513  253675 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
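
The "host record injected" line is the result of the sed pipeline a few lines up: a `hosts { … }` stanza mapping 192.168.58.1 to host.minikube.internal is spliced into the CoreDNS Corefile just before its `forward` directive, then applied with `kubectl replace`. A string-level sketch of the splice (the Corefile text here is a typical default, not copied from this cluster):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // A typical default Corefile body; the real one is fetched from the
        // coredns ConfigMap, edited, and pushed back with `kubectl replace -f -`.
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
        hosts := "    hosts {\n       192.168.58.1 host.minikube.internal\n       fallthrough\n    }\n"
        patched := strings.Replace(corefile, "    forward .", hosts+"    forward .", 1)
        fmt.Print(patched)
    }
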
	I1231 10:44:12.099705  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:44:12.099792  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:44:12.194606  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.194725  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:44:12.297993  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.500547  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117122608s)
	I1231 10:44:08.419100  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:10.918011  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.919429  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.995685  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203961788s)
	I1231 10:44:12.995735  253675 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20211231102953-6736"
	I1231 10:44:13.179949  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:14.102814  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.804765707s)
	I1231 10:44:14.106088  253675 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:44:14.106137  253675 addons.go:417] enableAddons completed in 3.078602112s
	I1231 10:44:15.629711  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:17.630121  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:15.417126  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.917105  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.920187  248388 node_ready.go:38] duration metric: took 4m0.011176212s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:44:17.924079  248388 out.go:176] 
	W1231 10:44:17.924288  248388 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:44:17.924318  248388 out.go:241] * 
	W1231 10:44:17.925165  248388 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:44:20.129390  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:22.129653  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:24.629739  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:27.129548  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:29.129623  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:31.130167  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:33.629367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:35.630512  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:38.129072  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:40.129862  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:42.628942  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:44.630077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:47.129721  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:49.629888  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:52.129324  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:54.129718  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:56.629788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:59.129021  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:01.129651  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:03.629842  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:05.629877  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:08.128850  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:10.129558  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:12.629587  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:14.629796  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:17.129902  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:19.629779  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:22.129932  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:24.630806  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:27.130380  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:29.629652  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:31.629743  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:34.129422  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:36.629867  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:39.129103  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:41.129601  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2831ff3abf5d3       6de166512aa22       5 minutes ago       Exited              kindnet-cni               6                   2de2afafee004
	bd73d75d2e911       b46c42588d511       12 minutes ago      Running             kube-proxy                0                   92acbaf1e0f9d
	6f1fab877ff5d       b6d7abedde399       12 minutes ago      Running             kube-apiserver            0                   4bf0c162f76ea
	631b3be24dd2b       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   8150f672d9df2
	a578cbf12a8e4       f51846a4fd288       12 minutes ago      Running             kube-controller-manager   0                   8046160d707ed
	78f1ab230e901       71d575efe6283       12 minutes ago      Running             kube-scheduler            0                   7a99114bf9d50
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:45:43 UTC. --
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.232114334Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.955849355Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\""
	Dec 31 10:35:59 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:35:59.962860726Z" level=info msg="RemoveContainer for \"d09a1a150a26b2a1fafcaf047e419fe2b681dcee1016ffc3393116bb6a72be46\" returns successfully"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.625457193Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.653652653Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.654422181Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:30.903113456Z" level=info msg="StartContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\" returns successfully"
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196090794Z" level=info msg="Finish piping stdout of container \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196091034Z" level=info msg="Finish piping stderr of container \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.196929815Z" level=info msg="TaskExit event &TaskExit{ContainerID:82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523,ID:82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523,Pid:2291,ExitStatus:2,ExitedAt:2021-12-31 10:37:41.196610758 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.234636261Z" level=info msg="shim disconnected" id=82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523
	Dec 31 10:37:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:41.234754150Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:37:42 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:42.146475323Z" level=info msg="RemoveContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\""
	Dec 31 10:37:42 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:37:42.153143179Z" level=info msg="RemoveContainer for \"63b83e6e86d6f505da0689a06531b98663987ffbd9fbadda5ee9e51e682092e1\" returns successfully"
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.625286175Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.653191231Z" level=info msg="CreateContainer within sandbox \"2de2afafee0046e35af925d4b9ad9d07f1c71a4591a74f81ec7b10dd8bd4a633\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.653987597Z" level=info msg="StartContainer for \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:30 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:30.888030955Z" level=info msg="StartContainer for \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\" returns successfully"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.186125929Z" level=info msg="Finish piping stderr of container \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.186157192Z" level=info msg="Finish piping stdout of container \"2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.187028085Z" level=info msg="TaskExit event &TaskExit{ContainerID:2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29,ID:2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29,Pid:2611,ExitStatus:2,ExitedAt:2021-12-31 10:40:41.186724899 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.215211769Z" level=info msg="shim disconnected" id=2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.215300009Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.497569419Z" level=info msg="RemoveContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\""
	Dec 31 10:40:41 default-k8s-different-port-20211231103230-6736 containerd[462]: time="2021-12-31T10:40:41.503450657Z" level=info msg="RemoveContainer for \"82faf00c4081423f1780262242d65c33921a04e0af26157d6411a0e9b5e61523\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_33_24_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:33:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:45:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:43:42 +0000   Fri, 31 Dec 2021 10:33:16 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-rgq8t                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z25nr                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x3 over 12m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549] <==
	* {"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:33:16.514Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.139Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:33:17.140Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.141Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:33:17.179Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-12-31T10:43:17.779Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":588}
	{"level":"info","ts":"2021-12-31T10:43:17.780Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":588,"took":"595.668µs"}
	
	* 
	* ==> kernel <==
	*  10:45:43 up  1:28,  0 users,  load average: 1.03, 1.08, 1.72
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e] <==
	* I1231 10:33:20.079343       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1231 10:33:20.080322       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1231 10:33:20.080422       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1231 10:33:20.080482       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1231 10:33:20.080553       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1231 10:33:20.080593       1 cache.go:39] Caches are synced for autoregister controller
	I1231 10:33:20.920577       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1231 10:33:20.920611       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1231 10:33:20.941744       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I1231 10:33:20.945455       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I1231 10:33:20.945476       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I1231 10:33:21.601484       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1231 10:33:21.639873       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1231 10:33:21.720429       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1231 10:33:21.727635       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I1231 10:33:21.728837       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:33:21.735242       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:33:22.098268       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:33:23.373002       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:33:23.385196       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:33:23.401377       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:33:28.493494       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:33:35.559469       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:33:35.607401       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:33:37.326422       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f] <==
	* I1231 10:33:34.946962       1 shared_informer.go:247] Caches are synced for GC 
	I1231 10:33:34.947085       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I1231 10:33:34.946811       1 shared_informer.go:247] Caches are synced for cronjob 
	I1231 10:33:34.947601       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I1231 10:33:34.947738       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1231 10:33:34.947910       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I1231 10:33:34.950660       1 shared_informer.go:247] Caches are synced for job 
	I1231 10:33:34.954003       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1231 10:33:34.956287       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1231 10:33:34.956482       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1231 10:33:35.121007       1 shared_informer.go:247] Caches are synced for disruption 
	I1231 10:33:35.121046       1 disruption.go:371] Sending events to api server.
	I1231 10:33:35.157911       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.163115       1 shared_informer.go:247] Caches are synced for resource quota 
	I1231 10:33:35.205005       1 shared_informer.go:247] Caches are synced for stateful set 
	I1231 10:33:35.566040       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I1231 10:33:35.591164       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597663       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1231 10:33:35.597700       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1231 10:33:35.621312       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-z25nr"
	I1231 10:33:35.624088       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rgq8t"
	I1231 10:33:35.860532       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-5t6ck"
	I1231 10:33:35.890918       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-hkl6w"
	I1231 10:33:35.942712       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I1231 10:33:35.986134       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-5t6ck"
	
	* 
	* ==> kube-proxy [bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869] <==
	* I1231 10:33:37.213196       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:33:37.213313       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:33:37.213369       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:33:37.321972       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:33:37.322042       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:33:37.322055       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:33:37.322072       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:33:37.322557       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:33:37.323369       1 config.go:317] "Starting service config controller"
	I1231 10:33:37.323386       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:33:37.323454       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:33:37.323461       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:33:37.424281       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1231 10:33:37.424342       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb] <==
	* W1231 10:33:20.086374       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:20.086415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:20.086485       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:20.086591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:20.087225       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:20.087307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:33:20.933451       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:33:20.933497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:33:20.977195       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:33:20.977245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:33:21.045273       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:33:21.045313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:33:21.102118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:33:21.102162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:33:21.139854       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:33:21.139888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:33:21.193715       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:33:21.193755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:33:21.315680       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:33:21.315727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1231 10:33:21.315890       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:33:21.315957       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:33:21.399055       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:33:21.399096       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1231 10:33:21.711242       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:32:51 UTC, end at Fri 2021-12-31 10:45:43 UTC. --
	Dec 31 10:44:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:33.623007    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:44:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:33.889754    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:38 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:38.890510    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:43 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:43.891660    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:44:48.622983    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:48.623487    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:44:48 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:48.893104    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:53 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:53.894453    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:44:58 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:44:58.895595    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:01 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:01.623342    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:01 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:01.623696    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:03 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:03.897107    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:08 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:08.898745    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:13 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:13.899729    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:14 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:14.623024    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:14 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:14.623462    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:18 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:18.900961    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:23 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:23.902829    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:26 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:26.622399    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:26 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:26.622837    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:28 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:28.903591    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:33 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:33.905172    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:45:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: I1231 10:45:37.622799    1282 scope.go:110] "RemoveContainer" containerID="2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	Dec 31 10:45:37 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:37.623097    1282 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-rgq8t_kube-system(9e13c1c9-b40e-4ddc-8914-be909a764fa4)\"" pod="kube-system/kindnet-rgq8t" podUID=9e13c1c9-b40e-4ddc-8914-be909a764fa4
	Dec 31 10:45:38 default-k8s-different-port-20211231103230-6736 kubelet[1282]: E1231 10:45:38.906480    1282 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner: exit status 1 (63.47446ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4q6g (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t4q6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  50s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hkl6w" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod busybox coredns-64897985d-hkl6w storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (485.41s)

TestStartStop/group/old-k8s-version/serial/SecondStart (292.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1231 10:39:41.557240    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:40:10.310060    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:40:22.104996    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:40:36.756052    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:40:43.946169    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:41:11.631153    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:41:13.450756    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:41:41.135420    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:41:45.150062    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:41:59.313746    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:41:59.801237    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:42:39.269737    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (4m49.696746533s)

-- stdout --
	* [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20211231102602-6736" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image kubernetesui/dashboard:v2.3.1
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I1231 10:39:28.297525  248388 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:39:28.297636  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297643  248388 out.go:310] Setting ErrFile to fd 2...
	I1231 10:39:28.297648  248388 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:39:28.297773  248388 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:39:28.298053  248388 out.go:304] Setting JSON to false
	I1231 10:39:28.299599  248388 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4923,"bootTime":1640942245,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:39:28.299706  248388 start.go:122] virtualization: kvm guest
	I1231 10:39:28.303369  248388 out.go:176] * [old-k8s-version-20211231102602-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:39:28.306213  248388 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:39:28.303616  248388 notify.go:174] Checking for updates...
	I1231 10:39:28.308369  248388 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:39:28.310826  248388 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:28.313682  248388 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:39:28.316049  248388 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:39:28.316742  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:28.320866  248388 out.go:176] * Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
	I1231 10:39:28.320932  248388 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:39:28.372088  248388 docker.go:132] docker version: linux-20.10.12
	I1231 10:39:28.372203  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.488905  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.412122441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:39:28.489039  248388 docker.go:237] overlay module found
	I1231 10:39:28.491975  248388 out.go:176] * Using the docker driver based on existing profile
	I1231 10:39:28.492011  248388 start.go:280] selected driver: docker
	I1231 10:39:28.492018  248388 start.go:795] validating driver "docker" against &{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra
:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.492170  248388 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:39:28.492189  248388 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:39:28.492201  248388 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:39:28.492264  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.492289  248388 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:39:28.494431  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:39:28.495047  248388 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:39:28.596528  248388 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:39:28.528773732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:39:28.596686  248388 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:39:28.596711  248388 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:39:28.599384  248388 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:39:28.599534  248388 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:39:28.599569  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:28.599580  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:28.599601  248388 start_flags.go:298] config:
	{Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s Sched
uledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:28.602165  248388 out.go:176] * Starting control plane node old-k8s-version-20211231102602-6736 in cluster old-k8s-version-20211231102602-6736
	I1231 10:39:28.602214  248388 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:39:28.604479  248388 out.go:176] * Pulling base image ...
	I1231 10:39:28.604519  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:28.604576  248388 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 10:39:28.604590  248388 cache.go:57] Caching tarball of preloaded images
	I1231 10:39:28.604620  248388 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:39:28.604864  248388 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:39:28.604881  248388 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I1231 10:39:28.605028  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:28.648185  248388 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:39:28.648212  248388 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:39:28.648221  248388 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:39:28.648290  248388 start.go:313] acquiring machines lock for old-k8s-version-20211231102602-6736: {Name:mk363b8d877fe23a69d731c391a1b6f4ce841b33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:39:28.648398  248388 start.go:317] acquired machines lock for "old-k8s-version-20211231102602-6736" in 81.793µs
	I1231 10:39:28.648427  248388 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:39:28.648436  248388 fix.go:55] fixHost starting: 
	I1231 10:39:28.648678  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:28.687124  248388 fix.go:108] recreateIfNeeded on old-k8s-version-20211231102602-6736: state=Stopped err=<nil>
	W1231 10:39:28.687173  248388 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:39:28.691855  248388 out.go:176] * Restarting existing docker container for "old-k8s-version-20211231102602-6736" ...
	I1231 10:39:28.691970  248388 cli_runner.go:133] Run: docker start old-k8s-version-20211231102602-6736
	I1231 10:39:29.129996  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:39:29.173538  248388 kic.go:420] container "old-k8s-version-20211231102602-6736" state is running.
	I1231 10:39:29.174075  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:29.219092  248388 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/config.json ...
	I1231 10:39:29.219314  248388 machine.go:88] provisioning docker machine ...
	I1231 10:39:29.219347  248388 ubuntu.go:169] provisioning hostname "old-k8s-version-20211231102602-6736"
	I1231 10:39:29.219382  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:29.259417  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:29.259602  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:29.259620  248388 main.go:130] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20211231102602-6736 && echo "old-k8s-version-20211231102602-6736" | sudo tee /etc/hostname
	I1231 10:39:29.260468  248388 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54766->127.0.0.1:49422: read: connection reset by peer
	I1231 10:39:32.408132  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20211231102602-6736
	
	I1231 10:39:32.408224  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:32.452034  248388 main.go:130] libmachine: Using SSH client type: native
	I1231 10:39:32.452295  248388 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I1231 10:39:32.452329  248388 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20211231102602-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20211231102602-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20211231102602-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:39:32.592974  248388 main.go:130] libmachine: SSH cmd err, output: <nil>: 
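
The SSH command above applies the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1: if a 127.0.1.1 entry exists it is rewritten in place, otherwise one is appended. The same idiom, reduced to a standalone sketch with a placeholder hostname:

    HOST=example-node   # placeholder; minikube uses the profile name
    if ! grep -q "[[:space:]]$HOST\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $HOST/" /etc/hosts
      else
        echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts
      fi
    fi
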
	I1231 10:39:32.593020  248388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:39:32.593043  248388 ubuntu.go:177] setting up certificates
	I1231 10:39:32.593054  248388 provision.go:83] configureAuth start
	I1231 10:39:32.593097  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:32.631818  248388 provision.go:138] copyHostCerts
	I1231 10:39:32.631883  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:39:32.631890  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:39:32.631953  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:39:32.632060  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:39:32.632071  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:39:32.632110  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:39:32.632180  248388 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:39:32.632189  248388 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:39:32.632208  248388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:39:32.632302  248388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20211231102602-6736 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20211231102602-6736]
	I1231 10:39:33.171522  248388 provision.go:172] copyRemoteCerts
	I1231 10:39:33.171593  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:39:33.171626  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.215114  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.313197  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:39:33.336105  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I1231 10:39:33.356635  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1231 10:39:33.379375  248388 provision.go:86] duration metric: configureAuth took 786.294314ms
	I1231 10:39:33.379494  248388 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:39:33.379778  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:39:33.379801  248388 machine.go:91] provisioned docker machine in 4.160462173s
	I1231 10:39:33.379812  248388 start.go:267] post-start starting for "old-k8s-version-20211231102602-6736" (driver="docker")
	I1231 10:39:33.379817  248388 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:39:33.379857  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:39:33.379894  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.419775  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.517404  248388 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:39:33.521457  248388 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:39:33.521489  248388 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:39:33.521498  248388 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:39:33.521503  248388 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:39:33.521516  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:39:33.521566  248388 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:39:33.521628  248388 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:39:33.521709  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:39:33.529871  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:33.549872  248388 start.go:270] post-start completed in 170.044596ms
	I1231 10:39:33.549940  248388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:39:33.549978  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.589440  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.685438  248388 fix.go:57] fixHost completed within 5.036996865s
	I1231 10:39:33.685483  248388 start.go:80] releasing machines lock for "old-k8s-version-20211231102602-6736", held for 5.037064541s
	I1231 10:39:33.685596  248388 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725054  248388 ssh_runner.go:195] Run: systemctl --version
	I1231 10:39:33.725110  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.725151  248388 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:39:33.725205  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:39:33.764466  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.765016  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:39:33.861869  248388 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:39:33.891797  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:39:33.904477  248388 docker.go:158] disabling docker service ...
	I1231 10:39:33.904532  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:39:33.916366  248388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:39:33.927743  248388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:39:34.013024  248388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:39:34.091153  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:39:34.102021  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:39:34.116459  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
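
Two runtime files are written here: /etc/crictl.yaml points crictl at the containerd socket, and /etc/containerd/config.toml is delivered as a base64 blob so it passes through the remote shell unmangled. The payload can be decoded straight from the log; its opening lines are shown below, and deeper in the same file sit the settings this profile depends on, e.g. conf_dir = "/etc/cni/net.mk" (matching kubelet.cni-conf-dir) and SystemdCgroup = false.

    # Decode the containerd config exactly as the remote shell does (paste the blob from the log)
    echo '<base64 payload from the log>' | base64 -d | head -n 8
    # version = 2
    # root = "/var/lib/containerd"
    # state = "/run/containerd"
    # oom_score = 0
    # [grpc]
    #   address = "/run/containerd/containerd.sock"
    #   uid = 0
    #   gid = 0
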
	I1231 10:39:34.131061  248388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:39:34.138871  248388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:39:34.147361  248388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:39:34.228786  248388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:39:34.310283  248388 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:39:34.310366  248388 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:39:34.316663  248388 start.go:458] Will wait 60s for crictl version
	I1231 10:39:34.316739  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:34.347621  248388 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:39:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:39:45.394578  248388 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:39:45.422381  248388 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
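
containerd was restarted a few lines earlier, so the first crictl probe races the daemon's startup: the CRI service answers "server is not initialized yet" and minikube's retry helper backs off (~11s) before the second attempt succeeds. An equivalent readiness loop, as a sketch:

    # Poll the CRI endpoint until containerd's CRI plugin is up (sketch, ~60s budget)
    for _ in $(seq 1 12); do
      sudo crictl version >/dev/null 2>&1 && break
      sleep 5
    done
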
	I1231 10:39:45.422454  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.445888  248388 ssh_runner.go:195] Run: containerd --version
	I1231 10:39:45.473645  248388 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.4.12 ...
	I1231 10:39:45.473739  248388 cli_runner.go:133] Run: docker network inspect old-k8s-version-20211231102602-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:39:45.512179  248388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1231 10:39:45.516345  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
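
The host.minikube.internal pin is refreshed with a filter-and-rewrite rather than sed -i or a plain append: any old entry is dropped via grep -v, the fresh line is added, and the result is installed with sudo cp, because an output redirection would run with the caller's privileges rather than root's. The same pattern, spelled out:

    # Rebuild /etc/hosts with exactly one host.minikube.internal entry (same idiom as the log)
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
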
	I1231 10:39:45.530412  248388 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:39:45.532672  248388 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:39:45.534826  248388 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:39:45.534904  248388 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 10:39:45.534972  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.562331  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.562363  248388 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:39:45.562405  248388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:39:45.588904  248388 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:39:45.588927  248388 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:39:45.588971  248388 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:39:45.617144  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:39:45.617169  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:39:45.617187  248388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:39:45.617200  248388 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20211231102602-6736 NodeName:old-k8s-version-20211231102602-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:39:45.617337  248388 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20211231102602-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20211231102602-6736
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.49.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
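
The dump above is the complete kubeadm input minikube generates for this profile: an InitConfiguration/ClusterConfiguration pair on kubeadm.k8s.io/v1beta1 (the API level Kubernetes v1.16 expects), plus KubeletConfiguration and KubeProxyConfiguration. It is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and ultimately consumed by kubeadm; roughly, as a hedged sketch of the bootstrap step rather than the exact invocation:

    # Roughly what the bootstrapper runs against the generated config (sketch)
    sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=all   # illustration; minikube passes a specific list
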
	I1231 10:39:45.617416  248388 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=old-k8s-version-20211231102602-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
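
The kubelet drop-in above uses the standard systemd override idiom: the first, empty ExecStart= clears the ExecStart inherited from kubelet.service so that the second line fully replaces it, and --cni-conf-dir=/etc/cni/net.mk ties the kubelet to the CNI directory configured for containerd earlier. The shape of the installed file (flags abbreviated; the full command line is in the log above):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (shape only; see full flags above)
    [Unit]
    Wants=containerd.service
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --config=/var/lib/kubelet/config.yaml \
      --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
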
	I1231 10:39:45.617471  248388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1231 10:39:45.626756  248388 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:39:45.626830  248388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:39:45.634969  248388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (642 bytes)
	I1231 10:39:45.650571  248388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:39:45.667126  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I1231 10:39:45.684267  248388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:39:45.687751  248388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:39:45.698181  248388 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736 for IP: 192.168.49.2
	I1231 10:39:45.698295  248388 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:39:45.698331  248388 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:39:45.698394  248388 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.key
	I1231 10:39:45.698446  248388 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key.dd3b5fb2
	I1231 10:39:45.698482  248388 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key
	I1231 10:39:45.698570  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:39:45.698600  248388 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:39:45.698611  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:39:45.698633  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:39:45.698653  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:39:45.698673  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:39:45.698710  248388 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:39:45.699579  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:39:45.721393  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1231 10:39:45.741875  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:39:45.762168  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:39:45.782716  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:39:45.803141  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:39:45.824081  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:39:45.844631  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:39:45.865581  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:39:45.888196  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:39:45.911552  248388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:39:45.932076  248388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:39:45.947761  248388 ssh_runner.go:195] Run: openssl version
	I1231 10:39:45.953604  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:39:45.964111  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968075  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.968153  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:39:45.974160  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:39:45.983159  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:39:45.992274  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996413  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:45.996467  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:39:46.002116  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:39:46.010648  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:39:46.019736  248388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024515  248388 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.024587  248388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:39:46.030635  248388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
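
The openssl sequence above implements OpenSSL's hashed-directory lookup: each CA is placed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject-hash name, "<hash>.0" (the 3ec20f2e, b5213941 and 51391683 links above); `openssl x509 -hash -noout` prints that hash. Condensed into one sketch:

    # Install a CA the way the log does: copy, then link under its subject hash (sketch)
    crt=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$crt")
    sudo ln -fs "$crt" "/etc/ssl/certs/$hash.0"
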
	I1231 10:39:46.039396  248388 kubeadm.go:388] StartCluster: {Name:old-k8s-version-20211231102602-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20211231102602-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:39:46.039522  248388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:39:46.039635  248388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:39:46.068660  248388 cri.go:87] found id: "9711ffb10b897aecfb60ff957702f91f87cdd75e0701725914ee129e8c6799cb"
	I1231 10:39:46.068685  248388 cri.go:87] found id: "91f9570ac59622f25f47f9d8bf9cfa1ecd2c14cb7722777629a959e7f187d512"
	I1231 10:39:46.068691  248388 cri.go:87] found id: "090a101afa0e5be4c178038538c1438ae269f1339bb853fc4beb2973fd8f69c6"
	I1231 10:39:46.068695  248388 cri.go:87] found id: "a0fea282c2cab22ac98fca2724b604a81ad02188a7148a610d923ce9704541fe"
	I1231 10:39:46.068699  248388 cri.go:87] found id: "fddc6f96e1ab6aff7257a3f3e9e946ae7b0d808bbca6e09ffc2653e63aa5c9e4"
	I1231 10:39:46.068704  248388 cri.go:87] found id: "c5161903fa79820ba4aac6aae4e2aa2335944ccae08a80bec50f7a09bcb290a0"
	I1231 10:39:46.068708  248388 cri.go:87] found id: ""
	I1231 10:39:46.068754  248388 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:39:46.086307  248388 cri.go:114] JSON = null
	W1231 10:39:46.086363  248388 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
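The warning above is a consistency check performed before attempting an unpause: crictl reports six kube-system containers known to containerd, while runc (listing container state from the same runc root) returns JSON null, i.e. zero entries, so there is nothing in the paused state to resume. The two views being compared, exactly as run over SSH:

    # container IDs known to the CRI, kube-system namespace only
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc's view of the same containerd root; null here means 0 containers
    sudo runc --root /run/containerd/runc/k8s.io list -f json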
	I1231 10:39:46.086418  248388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:39:46.094669  248388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:39:46.102668  248388 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.103765  248388 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20211231102602-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:39:46.104304  248388 kubeconfig.go:127] "old-k8s-version-20211231102602-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:39:46.105141  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:39:46.107623  248388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:39:46.116159  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.116212  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.133979  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.334456  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.334533  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.350538  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.534837  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.534919  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.550816  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.735119  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.735239  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.753056  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:46.934288  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:46.934363  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:46.951681  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:47.135023  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:47.135101  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:47.152550  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:47.335029  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:47.335115  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:47.351854  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:47.534112  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:47.534201  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:47.550587  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:47.734908  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:47.734983  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:47.750831  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:47.934079  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:47.934168  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:47.951066  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:48.134297  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:48.134366  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:48.149419  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:48.334062  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:48.334157  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:48.350555  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:48.534878  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:48.534992  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:48.551106  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:48.734409  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:48.734487  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:48.751378  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:48.934729  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:48.934886  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:48.952549  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:49.134824  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:49.134924  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:49.150693  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:39:49.150726  248388 api_server.go:165] Checking apiserver status ...
	I1231 10:39:49.150765  248388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:39:49.166744  248388 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:39:49.166812  248388 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
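Each "Checking apiserver status" iteration above is the same probe, retried roughly every 200ms: pgrep searches for a kube-apiserver process whose full command line matches the cluster, and a non-zero exit means no such process exists yet. Once the probe keeps failing, minikube concludes the control plane must be rebuilt and falls through to the kubeadm reset below. The probe itself, for reference:

    # -f: match against the full command line; -x: require an exact match; -n: newest match
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # exit status 1 when no apiserver is running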
	I1231 10:39:49.166837  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:39:49.915167  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:39:49.927449  248388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:39:49.935702  248388 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:39:49.935784  248388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:39:49.945133  248388 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:39:49.945201  248388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:40:01.197512  248388 out.go:203]   - Generating certificates and keys ...
	I1231 10:40:01.199961  248388 out.go:203]   - Booting up control plane ...
	I1231 10:40:01.203074  248388 out.go:203]   - Configuring RBAC rules ...
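The kubeadm init at 10:39:49.945 above explicitly ignores the preflight checks that are expected to fail when reusing a docker-driver node: the manifest and etcd directories already exist from the previous run, and the Swap and SystemVerification checks reflect the host environment rather than the guest (hence the kubeadm.go:218 note above). Condensed shape of the invocation, with the flag list abbreviated (the full command is in the Start line above):

    sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables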
	I1231 10:40:01.206183  248388 cni.go:93] Creating CNI manager for ""
	I1231 10:40:01.206224  248388 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:40:01.208451  248388 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:40:01.208540  248388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:40:01.212803  248388 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I1231 10:40:01.212831  248388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:40:01.227527  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:40:01.503411  248388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:40:01.503539  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=old-k8s-version-20211231102602-6736 minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.503559  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:01.617474  248388 ops.go:34] apiserver oom_adj: -16
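The oom_adj readback at ops.go:34 confirms the new API server is shielded from the kernel OOM killer: /proc/<pid>/oom_adj is the legacy badness adjustment (range -17 to +15), and -16 leaves the process as one of the last candidates to be killed under memory pressure. Equivalent manual check:

    cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 for this run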
	I1231 10:40:01.617597  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:02.286294  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:02.785910  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:03.285879  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:03.786120  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:04.286213  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:04.785628  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:05.286526  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:05.785916  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:06.285927  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:06.786520  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:07.286634  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:07.785622  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:08.285621  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:08.785980  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:09.285657  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:09.785778  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:10.286662  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:10.786025  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:11.286281  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:11.786574  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:12.285800  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:12.786532  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:13.286195  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:13.785848  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:14.286306  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:14.785834  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:15.286191  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:15.786611  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:16.285813  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:16.786057  248388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:40:17.183895  248388 kubeadm.go:864] duration metric: took 15.680328365s to wait for elevateKubeSystemPrivileges.
	I1231 10:40:17.184024  248388 kubeadm.go:390] StartCluster complete in 31.144629299s
	I1231 10:40:17.184054  248388 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.184188  248388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:40:17.186479  248388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:40:17.705951  248388 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20211231102602-6736" rescaled to 1
	I1231 10:40:17.706014  248388 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}
	I1231 10:40:17.708934  248388 out.go:176] * Verifying Kubernetes components...
	I1231 10:40:17.706076  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:40:17.709017  248388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:40:17.706087  248388 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:40:17.709130  248388 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709154  248388 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709171  248388 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:40:17.709180  248388 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709197  248388 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709204  248388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709207  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709214  248388 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709224  248388 addons.go:165] addon metrics-server should already be in state true
	I1231 10:40:17.709252  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.709546  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709679  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709707  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.709180  248388 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20211231102602-6736"
	I1231 10:40:17.709815  248388 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.709831  248388 addons.go:165] addon dashboard should already be in state true
	I1231 10:40:17.709863  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.706302  248388 config.go:176] Loaded profile config "old-k8s-version-20211231102602-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I1231 10:40:17.710310  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.779917  248388 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:40:17.782452  248388 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:40:17.780138  248388 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:17.782516  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:40:17.782593  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.785955  248388 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.786036  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:40:17.786046  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:40:17.786103  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.786394  248388 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20211231102602-6736"
	W1231 10:40:17.786421  248388 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:40:17.786450  248388 host.go:66] Checking if "old-k8s-version-20211231102602-6736" exists ...
	I1231 10:40:17.786855  248388 cli_runner.go:133] Run: docker container inspect old-k8s-version-20211231102602-6736 --format={{.State.Status}}
	I1231 10:40:17.791798  248388 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:40:17.791923  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:40:17.791936  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:40:17.792036  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.849271  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.850285  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.858679  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:17.864118  248388 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:17.864145  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:40:17.864187  248388 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20211231102602-6736
	I1231 10:40:17.908918  248388 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:40:17.909113  248388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:40:17.911266  248388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/old-k8s-version-20211231102602-6736/id_rsa Username:docker}
	I1231 10:40:18.000420  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:40:18.193571  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:40:18.193674  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:40:18.199990  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:40:18.200020  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:40:18.285365  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:40:18.380538  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:40:18.380663  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:40:18.381915  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:40:18.381967  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:40:18.479807  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:40:18.479844  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:40:18.483146  248388 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.483186  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:40:18.506042  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:40:18.506075  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:40:18.507973  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:40:18.587116  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:40:18.587147  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:40:18.608838  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:40:18.608870  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:40:18.702834  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:40:18.702926  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:40:18.781339  248388 start.go:773] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
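The "host record injected" line is the outcome of the replace pipeline issued at 10:40:17.909 above: the coredns ConfigMap is read with kubectl get, sed splices a hosts plugin block in front of the "forward . /etc/resolv.conf" directive, and the result is written back with kubectl replace, so pods can resolve host.minikube.internal to the container gateway. The injected Corefile fragment:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }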
	I1231 10:40:18.799607  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:40:18.799644  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:40:18.889031  248388 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:18.889104  248388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:40:18.912177  248388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:40:19.001324  248388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.00085472s)
	I1231 10:40:19.487581  248388 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20211231102602-6736"
	I1231 10:40:19.890512  248388 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:40:19.890555  248388 addons.go:417] enableAddons completed in 2.184473142s
	I1231 10:40:19.918068  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:22.417255  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:24.417743  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:26.916948  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:28.917781  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:31.417404  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:33.917310  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:36.417381  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:38.417560  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:40.916598  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:42.916916  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:44.917164  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:47.416836  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:49.916761  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:51.917072  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:53.917106  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:55.918552  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:40:58.418103  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:00.917229  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:03.417718  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:05.417993  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:07.916832  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:09.917062  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:11.917659  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:14.416640  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:16.417274  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:18.417736  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:20.916549  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:22.917390  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:24.917495  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:27.416717  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:29.920179  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:32.416641  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:34.917102  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:36.917265  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:39.417341  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:41.917725  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:44.417286  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:46.916626  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:48.917674  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:51.417587  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:53.916892  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:55.917634  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:41:58.417650  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:00.916883  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:02.917451  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:04.917545  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:07.416653  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:09.417665  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:11.916571  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:13.917242  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:16.417113  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:18.417745  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:20.916702  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:22.917430  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:25.417573  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:27.417667  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:29.918025  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:32.417484  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:34.417793  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:36.917449  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:38.917872  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:41.419241  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:43.918100  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:46.417949  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:48.917157  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:50.917643  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:53.417372  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:55.418335  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:57.917329  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:42:59.917697  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:01.917988  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:04.417783  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:06.417844  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:08.418352  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:10.917621  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:13.417741  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:15.418097  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:17.917171  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:20.417650  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:22.917443  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:24.917803  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.417182  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:29.418833  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:31.917246  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:33.918423  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:36.418075  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:38.917717  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:41.417106  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:43.418275  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:45.419818  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:47.917950  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:50.417701  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:52.418425  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:54.917711  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:57.416994  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:59.418293  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:01.918364  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:03.922878  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:06.417497  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:08.419100  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:10.918011  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.919429  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:15.417126  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.917105  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.920187  248388 node_ready.go:38] duration metric: took 4m0.011176212s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:44:17.924079  248388 out.go:176] 
	W1231 10:44:17.924288  248388 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:44:17.924318  248388 out.go:241] * 
	W1231 10:44:17.925165  248388 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:44:17.927794  248388 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-20211231102602-6736 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
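The failing wait is node readiness: kubeadm init succeeds and the node registers, but its Ready condition never leaves False during the four-minute poll above. A kubelet reports Ready only once its container runtime network is configured, so a CNI config that is never picked up is a common cause of this pattern. A sketch of the equivalent manual check, assuming a kubeconfig that points at the failed cluster:

    kubectl get node old-k8s-version-20211231102602-6736 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False for this run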
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:39:29.118004867Z",
	            "FinishedAt": "2021-12-31T10:39:27.710647386Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7bcb66e32570c51223584d89c06c38407a807612f74bbcd0645dab033af753ae",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7bcb66e32570",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "b5fcec0b7d4b06090fe9be385801ede5fd25d0e4d16b5573d54b18438c62a2e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
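The inspect dump above is the complete docker container inspect output for the old-k8s-version node container. The few fields the harness actually consumes (the published host ports and the address on the per-profile network) can be read directly with the same Go templates minikube runs elsewhere in this log; a minimal sketch, using the profile name from this report:

	# host port published for the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20211231102602-6736
	# IPv4/IPv6 addresses on the cluster network
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' old-k8s-version-20211231102602-6736
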
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.057444701s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:26 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20211231103229-6736      | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:29 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | disable-driver-mounts-20211231103229-6736                  |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20211231102928-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:32:30 UTC |
	|         | no-preload-20211231102928-6736                             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:32:30 UTC | Fri, 31 Dec 2021 10:33:37 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:37 UTC | Fri, 31 Dec 2021 10:33:38 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:38 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
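	The Audit table above is minikube's persisted command history for this host, as rendered by "minikube logs": one row per invocation with profile, user, version, and start/end times. A sketch for querying the raw history directly, under the assumption (not verified for v1.24.0) that minikube stores it as JSON lines in $MINIKUBE_HOME/logs/audit.json:
	
		# assumed location; MINIKUBE_HOME is printed near the top of this report
		jq -r '.data | "\(.startTime)  \(.command) \(.args)  [\(.profile)]"' "$MINIKUBE_HOME/logs/audit.json"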
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:43:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:43:22.844642  253675 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:43:22.844763  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844769  253675 out.go:310] Setting ErrFile to fd 2...
	I1231 10:43:22.844775  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844954  253675 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:43:22.845319  253675 out.go:304] Setting JSON to false
	I1231 10:43:22.847068  253675 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5157,"bootTime":1640942245,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:43:22.847193  253675 start.go:122] virtualization: kvm guest
	I1231 10:43:22.850701  253675 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:43:22.853129  253675 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:43:22.850948  253675 notify.go:174] Checking for updates...
	I1231 10:43:22.855641  253675 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:43:22.857638  253675 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:22.860223  253675 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:43:22.862455  253675 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:43:22.862933  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:22.863367  253675 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:43:22.907454  253675 docker.go:132] docker version: linux-20.10.12
	I1231 10:43:22.907559  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.010925  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:22.94341606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:43:23.011080  253675 docker.go:237] overlay module found
	I1231 10:43:23.014207  253675 out.go:176] * Using the docker driver based on existing profile
	I1231 10:43:23.014243  253675 start.go:280] selected driver: docker
	I1231 10:43:23.014249  253675 start.go:795] validating driver "docker" against &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.014391  253675 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:43:23.014412  253675 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:43:23.014421  253675 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:43:23.014467  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.014493  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.017136  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.017884  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.116838  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:23.05133577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:43:23.116982  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.117011  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.119638  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.119774  253675 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:43:23.119804  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:23.119812  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:23.119829  253675 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.122399  253675 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:43:23.122462  253675 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:43:23.124490  253675 out.go:176] * Pulling base image ...
	I1231 10:43:23.124541  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:23.124581  253675 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:43:23.124590  253675 cache.go:57] Caching tarball of preloaded images
	I1231 10:43:23.124659  253675 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:43:23.124888  253675 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:43:23.124904  253675 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:43:23.125057  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.163843  253675 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:43:23.163872  253675 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:43:23.163888  253675 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:43:23.163917  253675 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:43:23.164009  253675 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 74.119µs
	I1231 10:43:23.164031  253675 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:43:23.164039  253675 fix.go:55] fixHost starting: 
	I1231 10:43:23.164295  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.199236  253675 fix.go:108] recreateIfNeeded on embed-certs-20211231102953-6736: state=Stopped err=<nil>
	W1231 10:43:23.199269  253675 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:43:20.417650  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:22.917443  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:23.202320  253675 out.go:176] * Restarting existing docker container for "embed-certs-20211231102953-6736" ...
	I1231 10:43:23.202389  253675 cli_runner.go:133] Run: docker start embed-certs-20211231102953-6736
	I1231 10:43:23.625205  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.664982  253675 kic.go:420] container "embed-certs-20211231102953-6736" state is running.
	I1231 10:43:23.665431  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:23.703812  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.704117  253675 machine.go:88] provisioning docker machine ...
	I1231 10:43:23.704144  253675 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:43:23.704223  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:23.742698  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:23.743011  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:23.743039  253675 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:43:23.743711  253675 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58602->127.0.0.1:49427: read: connection reset by peer
	I1231 10:43:26.891264  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:43:26.891349  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:26.930929  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:26.931119  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:26.931150  253675 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
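	The shell block above is minikube's standard /etc/hosts pin: an existing 127.0.1.1 entry is rewritten to the machine name, otherwise one is appended, so name lookups inside the node resolve without DNS. The result can be checked from the host once the machine is up; a sketch against the embed-certs profile from this log:
	
		out/minikube-linux-amd64 ssh -p embed-certs-20211231102953-6736 "grep 127.0.1.1 /etc/hosts"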
	I1231 10:43:27.068707  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:43:27.068740  253675 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:43:27.068790  253675 ubuntu.go:177] setting up certificates
	I1231 10:43:27.068818  253675 provision.go:83] configureAuth start
	I1231 10:43:27.068869  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:27.106090  253675 provision.go:138] copyHostCerts
	I1231 10:43:27.106158  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:43:27.106172  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:43:27.106233  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:43:27.106338  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:43:27.106358  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:43:27.106382  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:43:27.106444  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:43:27.106453  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:43:27.106472  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:43:27.106526  253675 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
	I1231 10:43:27.255618  253675 provision.go:172] copyRemoteCerts
	I1231 10:43:27.255688  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:43:27.255719  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.293465  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.393419  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:43:27.414701  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:43:27.438482  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:43:27.461503  253675 provision.go:86] duration metric: configureAuth took 392.669293ms
	I1231 10:43:27.461542  253675 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:43:27.461744  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:27.461759  253675 machine.go:91] provisioned docker machine in 3.757626792s
	I1231 10:43:27.461767  253675 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:43:27.461773  253675 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:43:27.461808  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:43:27.461836  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.504760  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.605497  253675 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:43:27.609459  253675 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:43:27.609488  253675 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:43:27.609499  253675 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:43:27.609505  253675 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:43:27.609516  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:43:27.609580  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:43:27.609669  253675 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:43:27.609751  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:43:27.618322  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:27.638480  253675 start.go:270] post-start completed in 176.700691ms
	I1231 10:43:27.638544  253675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:43:27.638578  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.678338  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.773349  253675 fix.go:57] fixHost completed within 4.609301543s
	I1231 10:43:27.773377  253675 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 4.609356997s
	I1231 10:43:27.773448  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:24.917803  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.417182  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:27.808989  253675 ssh_runner.go:195] Run: systemctl --version
	I1231 10:43:27.809043  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.809080  253675 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:43:27.809149  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.849360  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.849710  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.941036  253675 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:43:27.971837  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:43:27.982942  253675 docker.go:158] disabling docker service ...
	I1231 10:43:27.983000  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:43:27.994466  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:43:28.005201  253675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:43:28.084281  253675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:43:28.165947  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:43:28.176963  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:43:28.193845  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
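For readability: the base64 payload piped through `base64 -d` above is minikube's generated /etc/containerd/config.toml. An abridged excerpt of the decoded TOML (decoded from the blob above; unrelated sections trimmed):

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    oom_score = 0

    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        sandbox_image = "k8s.gcr.io/pause:3.6"
        [plugins."io.containerd.grpc.v1.cri".containerd]
          discard_unpacked_layers = true
          snapshotter = "overlayfs"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
            runtime_type = "io.containerd.runc.v2"
            [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
              SystemdCgroup = false
        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/cni/bin"
          conf_dir = "/etc/cni/net.mk"

Note conf_dir = "/etc/cni/net.mk": it matches the kubelet cni-conf-dir extra option that appears later in this run.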
	I1231 10:43:28.210061  253675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:43:28.218904  253675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:43:28.227395  253675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:43:28.309175  253675 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:43:28.390283  253675 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:43:28.390355  253675 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:43:28.396380  253675 start.go:458] Will wait 60s for crictl version
	I1231 10:43:28.396511  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:28.426104  253675 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:43:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:43:29.418833  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:31.917246  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:33.918423  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:36.418075  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:39.474533  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:39.501276  253675 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:43:39.501336  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.527002  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.551133  253675 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:43:39.551225  253675 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:43:39.587623  253675 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:43:39.591336  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
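The one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal entry, append the current one, and copy the result back through a temp file so /etc/hosts is replaced in one step. The same pattern spelled out (bash sketch; HOSTS_IP and HOSTS_NAME are illustrative variable names, not from the run):

    HOSTS_IP=192.168.58.1              # value taken from the run above
    HOSTS_NAME=host.minikube.internal
    # Remove any existing "<ip><TAB><name>" line, then re-add the current one.
    { grep -v $'\t'"${HOSTS_NAME}\$" /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts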
	I1231 10:43:39.604414  253675 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:43:39.606523  253675 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:43:39.608679  253675 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:43:39.608778  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:39.608844  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.634556  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.634585  253675 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:43:39.634630  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.662182  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.662208  253675 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:43:39.662251  253675 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:43:39.687863  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:39.687887  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:39.687902  253675 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:43:39.687916  253675 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientC
AFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:43:39.688044  253675 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
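The rendered kubeadm config above packs InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into a single multi-document file. To sanity-check such a file by hand without touching the node, kubeadm's dry-run mode can be used (sketch; kubeadm v1.23 assumed, as in this run):

    # Renders manifests and validates the config without applying anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run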
	I1231 10:43:39.688123  253675 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
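One detail of the generated drop-in worth calling out: the bare ExecStart= line is deliberate. For a regular (non-oneshot) systemd service, ExecStart may only be set once, so an override must first clear the value inherited from kubelet.service:

    [Service]
    # Empty assignment resets ExecStart inherited from the base unit;
    # without it systemd would reject the second ExecStart below.
    ExecStart=
    # Abbreviated; the full flag list is in the drop-in printed above.
    ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --config=/var/lib/kubelet/config.yaml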
	I1231 10:43:39.688169  253675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:43:39.696210  253675 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:43:39.696312  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:43:39.704267  253675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:43:39.718589  253675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:43:39.734360  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1231 10:43:39.749132  253675 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:43:39.753026  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.764036  253675 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:43:39.764162  253675 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:43:39.764206  253675 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:43:39.764332  253675 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:43:39.764393  253675 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:43:39.764430  253675 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:43:39.764535  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:43:39.764569  253675 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:43:39.764576  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:43:39.764600  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:43:39.764619  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:43:39.764640  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:43:39.764679  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:39.765624  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:43:39.786589  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:43:39.806214  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:43:39.827437  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:43:39.847223  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:43:39.869717  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:43:39.892296  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:43:39.915269  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:43:39.940596  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:43:39.965015  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:43:39.987472  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:43:40.008065  253675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:43:40.023700  253675 ssh_runner.go:195] Run: openssl version
	I1231 10:43:40.029648  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:43:40.038817  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.042994  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.043064  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.049114  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:43:40.057598  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:43:40.067157  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071141  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071208  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.077176  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:43:40.085041  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:43:40.093428  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097387  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097447  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.102969  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
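The b5213941.0, 51391683.0 and 3ec20f2e.0 names above follow OpenSSL's hashed-directory convention: TLS clients look up CA certificates in /etc/ssl/certs by subject-name hash plus a .0 suffix, which is why each cert install pairs an `openssl x509 -hash` call with a symlink. Recreating one by hand (bash sketch):

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # path from the run above
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"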
	I1231 10:43:40.110890  253675 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_read
y:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:40.110993  253675 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:43:40.111061  253675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:43:40.137789  253675 cri.go:87] found id: "c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239"
	I1231 10:43:40.137839  253675 cri.go:87] found id: "03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	I1231 10:43:40.137847  253675 cri.go:87] found id: "a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d"
	I1231 10:43:40.137854  253675 cri.go:87] found id: "7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f"
	I1231 10:43:40.137861  253675 cri.go:87] found id: "bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3"
	I1231 10:43:40.137868  253675 cri.go:87] found id: "4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3"
	I1231 10:43:40.137875  253675 cri.go:87] found id: "eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072"
	I1231 10:43:40.137883  253675 cri.go:87] found id: ""
	I1231 10:43:40.137935  253675 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:43:40.152791  253675 cri.go:114] JSON = null
	W1231 10:43:40.152840  253675 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 7
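The warning is the telling line: the CRI view and the runc view of the node disagree. `crictl ps -a` also lists exited containers, while `runc list` only reports live runc state, so leftover state from the previous boot of this cluster can legitimately yield "0 vs 7". The two sides of the comparison, runnable by hand (bash):

    # CRI's view: all kube-system containers, including exited ones.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc's view: only containers with live state under this root (null here).
    sudo runc --root /run/containerd/runc/k8s.io list -f json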
	I1231 10:43:40.152904  253675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:43:40.161320  253675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:43:40.168524  253675 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.169402  253675 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211231102953-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:40.169777  253675 kubeconfig.go:127] "embed-certs-20211231102953-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:43:40.170362  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:43:40.172686  253675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:43:40.180820  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.180878  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.195345  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.395812  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.395894  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.412496  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.595573  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.595660  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.610754  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.795993  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.796074  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.812031  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.996111  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.996181  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.011422  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.195687  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.195777  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.211094  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.396304  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.396402  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.413206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.595459  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.595552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.611792  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.796061  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.796162  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.811694  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.995913  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.995991  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.013353  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.195522  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.195645  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.212206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.396496  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.396584  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.414476  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.595660  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.595748  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.611643  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:38.917717  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:41.417106  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:42.796314  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.797268  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.814486  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.995563  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.995659  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.011246  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.195463  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.195552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.211623  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.211657  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.211698  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.227133  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:43:43.227160  253675 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
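Every "Checking apiserver status" round above is the same probe, repeated roughly every 200ms: find a kube-apiserver process by full command line. Once the retries are exhausted, minikube declares the existing control plane stale ("needs reconfigure") and falls back to `kubeadm reset` followed by a fresh `kubeadm init`. The probe as a one-liner (bash sketch):

    # -f matches the full command line, -x requires the whole line to match,
    # -n picks the newest matching process.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver running" || echo "apiserver not running"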
	I1231 10:43:43.227194  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:43:43.962869  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:43:43.975076  253675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:43:43.984301  253675 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:43:43.984358  253675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:43:43.992612  253675 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:43:43.992653  253675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:43:44.298681  253675 out.go:203]   - Generating certificates and keys ...
	I1231 10:43:45.481520  253675 out.go:203]   - Booting up control plane ...
	I1231 10:43:43.418275  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:45.419818  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:47.917950  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:50.417701  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:52.418425  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:54.917711  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:57.416994  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:43:58.029054  253675 out.go:203]   - Configuring RBAC rules ...
	I1231 10:43:58.483326  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:58.483357  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:58.488726  253675 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:43:58.488827  253675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:43:58.493521  253675 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:43:58.493558  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:43:58.512259  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:43:59.192508  253675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:43:59.192665  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.192694  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.213675  253675 ops.go:34] apiserver oom_adj: -16
	I1231 10:43:59.322906  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.895946  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.395692  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.895655  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.395632  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.896412  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:02.395407  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.418293  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:01.918364  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:02.896452  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.396392  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.895366  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.396336  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.895859  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.395565  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.895587  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.395343  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.895271  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:07.395321  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.922878  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:06.417497  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:07.896038  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.396421  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.895347  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.395521  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.895830  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.395802  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.501742  253675 kubeadm.go:864] duration metric: took 11.30912297s to wait for elevateKubeSystemPrivileges.
	I1231 10:44:10.501806  253675 kubeadm.go:390] StartCluster complete in 30.390935465s
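The burst of `kubectl get sa default` calls above is a readiness gate: the default ServiceAccount only exists once the controller-manager's serviceaccount controller has started, and workloads that rely on it cannot be admitted before then, so minikube polls for it (about 11.3s in this run) before enabling addons. An equivalent wait loop (bash sketch, using the same binary and kubeconfig paths as the run):

    until sudo /var/lib/minikube/binaries/v1.23.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        -n default get sa default >/dev/null 2>&1; do
      sleep 0.5
    done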
	I1231 10:44:10.501833  253675 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:10.501996  253675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:44:10.504119  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:11.026995  253675 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:44:11.027076  253675 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:44:11.030066  253675 out.go:176] * Verifying Kubernetes components...
	I1231 10:44:11.027250  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:44:11.027529  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:44:11.027546  253675 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:44:11.030253  253675 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030274  253675 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.030298  253675 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:44:11.030347  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030509  253675 addons.go:65] Setting dashboard=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030533  253675 addons.go:153] Setting addon dashboard=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.030537  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1231 10:44:11.030543  253675 addons.go:165] addon dashboard should already be in state true
	I1231 10:44:11.030572  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030638  253675 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030651  253675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:44:11.030970  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031137  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031253  253675 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.031280  253675 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.031288  253675 addons.go:165] addon metrics-server should already be in state true
	I1231 10:44:11.031157  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031313  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.031695  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.094017  253675 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:44:11.101006  253675 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.100849  253675 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:44:11.102159  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:44:11.112212  253675 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:44:11.109078  253675 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.109381  253675 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.109540  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	W1231 10:44:11.112312  253675 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:44:11.112367  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.112422  253675 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.112434  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:44:11.112456  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:44:11.112471  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:44:11.112489  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112497  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112389  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112946  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.168999  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169017  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169333  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.171810  253675 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.171837  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:44:11.171897  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.220524  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
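The sed pipeline above edits the live CoreDNS ConfigMap: it splices a hosts block immediately before the `forward . /etc/resolv.conf` line, so host.minikube.internal resolves locally while every other name falls through to normal forwarding. The stanza it injects into the Corefile:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }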
	I1231 10:44:11.225393  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.379784  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:44:11.379826  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:44:11.383351  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.479751  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:44:11.479789  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:44:11.481212  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:44:11.481233  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:44:11.581046  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:44:11.581120  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:44:11.582256  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:44:11.582344  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:44:11.587594  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.679970  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.680004  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:44:11.682163  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:44:11.682192  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:44:11.791172  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:44:11.791211  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:44:11.791675  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.895732  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:44:11.895775  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:44:11.995782  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:44:11.995814  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:44:12.085513  253675 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I1231 10:44:12.099705  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:44:12.099792  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:44:12.194606  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.194725  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:44:12.297993  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.500547  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117122608s)
	I1231 10:44:08.419100  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:10.918011  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.919429  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:12.995685  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203961788s)
	I1231 10:44:12.995735  253675 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20211231102953-6736"
	I1231 10:44:13.179949  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:14.102814  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.804765707s)
	I1231 10:44:14.106088  253675 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:44:14.106137  253675 addons.go:417] enableAddons completed in 3.078602112s
	I1231 10:44:15.629711  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:17.630121  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:15.417126  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.917105  248388 node_ready.go:58] node "old-k8s-version-20211231102602-6736" has status "Ready":"False"
	I1231 10:44:17.920187  248388 node_ready.go:38] duration metric: took 4m0.011176212s waiting for node "old-k8s-version-20211231102602-6736" to be "Ready" ...
	I1231 10:44:17.924079  248388 out.go:176] 
	W1231 10:44:17.924288  248388 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:44:17.924318  248388 out.go:241] * 
	W1231 10:44:17.925165  248388 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5901cfe67efb1       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   8e9c4ebe9af8a
	260191439414c       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   27df6e69859e8
	1b18c6c3f72c7       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   8e9c4ebe9af8a
	360d04f1d2e49       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   9a26f93849781
	e488bccab2c37       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   3c7deabb07da8
	02f5bc6f1fdd0       b2756210eeabf       4 minutes ago        Running             etcd                      0                   9d962dd90af06
	b939ed1a80a18       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   416af3a4e9b8c
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 10:44:19 UTC. --
	Dec 31 10:39:52 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:39:52.097710781Z" level=info msg="StartContainer for \"02f5bc6f1fdd0081ac22b1216606e6a5da1908f6dd8b37174cb86189c9245c90\" returns successfully"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.092377569Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.380693037Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-wttrw,Uid:c6532ac6-8d82-4c81-b651-adeb7a219e08,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.399037577Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed pid=1843
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.482305697Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-7nkns,Uid:1d4a50e5-85e7-4191-833a-a5127b283fa8,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.504920540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-wttrw,Uid:c6532ac6-8d82-4c81-b651-adeb7a219e08,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.505332642Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27df6e69859e89d09c8ab47583e5daf823c0bf6b3aaae699d1e537545dc64f1d pid=1885
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.507612512Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.534762583Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.535398131Z" level=info msg="StartContainer for \"1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.595409903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nkns,Uid:1d4a50e5-85e7-4191-833a-a5127b283fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"27df6e69859e89d09c8ab47583e5daf823c0bf6b3aaae699d1e537545dc64f1d\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.598382599Z" level=info msg="CreateContainer within sandbox \"27df6e69859e89d09c8ab47583e5daf823c0bf6b3aaae699d1e537545dc64f1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.622612779Z" level=info msg="CreateContainer within sandbox \"27df6e69859e89d09c8ab47583e5daf823c0bf6b3aaae699d1e537545dc64f1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.623329382Z" level=info msg="StartContainer for \"260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8\""
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.702540187Z" level=info msg="StartContainer for \"260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8\" returns successfully"
	Dec 31 10:40:17 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:40:17.908184405Z" level=info msg="StartContainer for \"1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4\" returns successfully"
	Dec 31 10:42:58 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:58.295764908Z" level=info msg="Finish piping stderr of container \"1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4\""
	Dec 31 10:42:58 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:58.295856933Z" level=info msg="Finish piping stdout of container \"1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4\""
	Dec 31 10:42:58 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:58.298168547Z" level=info msg="TaskExit event &TaskExit{ContainerID:1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4,ID:1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4,Pid:1934,ExitStatus:2,ExitedAt:2021-12-31 10:42:58.297760875 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:42:58 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:58.335553455Z" level=info msg="shim disconnected" id=1b18c6c3f72c716ba7cb4ddc3bb237e6da85278760164b7eee6cc3e1af4ba4b4
	Dec 31 10:42:58 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:58.335661341Z" level=error msg="copy shim log" error="read /proc/self/fd/79: file already closed"
	Dec 31 10:42:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:59.239430413Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Dec 31 10:42:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:59.282739506Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"5901cfe67efb1a9a3756aa165a9fd6dcc93d517f15984cb489ba3da53d87e4e0\""
	Dec 31 10:42:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:59.283516691Z" level=info msg="StartContainer for \"5901cfe67efb1a9a3756aa165a9fd6dcc93d517f15984cb489ba3da53d87e4e0\""
	Dec 31 10:42:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:42:59.503500502Z" level=info msg="StartContainer for \"5901cfe67efb1a9a3756aa165a9fd6dcc93d517f15984cb489ba3da53d87e4e0\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:39:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:43:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:43:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:43:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:43:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                kindnet-wttrw                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                kube-proxy-7nkns                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                             Message
	  ----    ------                   ----                   ----                                             -------
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m29s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m29s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m29s)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m2s                   kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [02f5bc6f1fdd0081ac22b1216606e6a5da1908f6dd8b37174cb86189c9245c90] <==
	* 2021-12-31 10:39:52.116707 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
	2021-12-31 10:39:52.116757 I | raft: aec36adc501070cc became follower at term 0
	2021-12-31 10:39:52.116764 I | raft: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2021-12-31 10:39:52.116767 I | raft: aec36adc501070cc became follower at term 1
	2021-12-31 10:39:52.183063 W | auth: simple token is not cryptographically signed
	2021-12-31 10:39:52.187488 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2021-12-31 10:39:52.187856 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-12-31 10:39:52.188222 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:39:52.190334 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:39:52.190683 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:39:52.190768 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:39:53.117162 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:39:53.117224 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:39:53.117263 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:39:53.117287 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:39:53.117298 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:39:53.117591 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:39:53.117619 I | embed: ready to serve client requests
	2021-12-31 10:39:53.117650 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:39:53.117691 I | embed: ready to serve client requests
	2021-12-31 10:39:53.120604 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:39:53.121294 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:39:53.121579 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:39:53.121964 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:40:16.985363 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-9zxjv\" " with result "range_response_count:1 size:1435" took too long (179.374899ms) to execute
	
	* 
	* ==> kernel <==
	*  10:44:19 up  1:26,  0 users,  load average: 2.07, 1.27, 1.85
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [360d04f1d2e497917d40c239dd1ddc12199edce8119e293dbbf9e16d2ff6195d] <==
	* I1231 10:39:59.244674       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1231 10:39:59.519729       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1231 10:39:59.520945       1 controller.go:606] quota admission added evaluator for: endpoints
	I1231 10:39:59.794557       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:40:00.611399       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1231 10:40:00.899270       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1231 10:40:01.180222       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1231 10:40:16.685733       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1231 10:40:16.716910       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:40:16.805178       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1231 10:40:21.293506       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:40:21.293607       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:40:21.293688       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:40:21.293697       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:41:21.293994       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:41:21.294106       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:41:21.294175       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:41:21.294216       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:43:21.294532       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:43:21.294648       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:43:21.294727       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:43:21.294749       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [e488bccab2c374294dc4e1182a1f7c01461d03131238f7c50a0b1a3bc38b498d] <==
	* E1231 10:40:19.608083       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:40:19.608090       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"18cddee6-e602-4f4d-9648-f6af12da3e0d", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:40:19.608301       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"33562163-4423-46ae-8aa6-2e8d2611b80e", APIVersion:"apps/v1", ResourceVersion:"439", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1231 10:40:19.679316       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-766959b846" failed with pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:40:19.679329       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"33562163-4423-46ae-8aa6-2e8d2611b80e", APIVersion:"apps/v1", ResourceVersion:"439", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-766959b846-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1231 10:40:19.682299       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:40:19.682287       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"18cddee6-e602-4f4d-9648-f6af12da3e0d", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:40:19.702837       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-766959b846", UID:"33562163-4423-46ae-8aa6-2e8d2611b80e", APIVersion:"apps/v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-766959b846-br66c
	I1231 10:40:19.787164       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"18cddee6-e602-4f4d-9648-f6af12da3e0d", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-sdtqb
	I1231 10:40:20.319051       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-5b7b789f", UID:"103162bf-3c75-4430-b6aa-d13460f6d84e", APIVersion:"apps/v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-5b7b789f-vbdjk
	E1231 10:40:47.455706       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:40:49.211892       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:41:17.707570       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:41:21.213530       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:41:47.959282       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:41:53.215428       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:42:18.210914       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:42:25.217250       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:42:48.462650       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:42:57.219511       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:43:18.714522       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:43:29.221323       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:43:48.966250       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:44:01.223239       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:44:19.217878       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8] <==
	* W1231 10:40:17.793443       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:40:17.813322       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:40:17.813381       1 server_others.go:149] Using iptables Proxier.
	I1231 10:40:17.814012       1 server.go:529] Version: v1.16.0
	I1231 10:40:17.814913       1 config.go:313] Starting service config controller
	I1231 10:40:17.814953       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:40:17.819152       1 config.go:131] Starting endpoints config controller
	I1231 10:40:17.819190       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:40:17.915302       1 shared_informer.go:204] Caches are synced for service config 
	I1231 10:40:17.919428       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b939ed1a80a1833f964e536cf3c9e9cdc859e60141643e37c72deb76b9c1a7d7] <==
	* E1231 10:39:56.485784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:56.486236       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:56.487207       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:56.487313       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:56.487518       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:56.488908       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:56.489055       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:56.489075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:56.490229       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:56.490565       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:56.491226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.487195       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:57.488364       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:57.489386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:57.490301       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:57.491544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:57.492628       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.493751       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:57.496400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:57.497358       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:57.499029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:57.499226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:40:16.986652       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.001392       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.791910       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 10:44:19 UTC. --
	Dec 31 10:43:16 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:16.074187     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:21.075368     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:21.705834     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:43:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:21.705871     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:43:26 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:26.076350     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:31 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:31.077269     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:31 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:31.739018     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:43:31 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:31.739064     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:43:36 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:36.078063     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:41.078844     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:41.773520     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:43:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:41.773563     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:43:46 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:46.079758     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:51 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:51.082488     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:43:51 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:51.814123     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:43:51 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:51.814173     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:43:56 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:43:56.083646     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:44:01 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:01.084634     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:44:01 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:01.844397     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:44:01 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:01.844463     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:44:06 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:06.085530     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:44:11 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:11.086609     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:44:11 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:11.876763     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:44:11 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:11.876825     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:44:16 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:44:16.087475     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1 (79.956966ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-9zxjv" not found
	Error from server (NotFound): pods "metrics-server-5b7b789f-vbdjk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-sdtqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-766959b846-br66c" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (292.14s)
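The failure above is minikube's node_ready wait expiring: the kubelet keeps reporting NetworkReady=false ("cni plugin not initialized") even after the kindnet-cni container restarts, so the node never reaches Ready inside the 6m0s GUEST_START window. A minimal local triage sketch follows; the profile name is copied from the log, the kubectl and minikube subcommands are standard, but this exact sequence is illustrative and not part of the test harness:

	# Confirm the node condition and the kubelet's stated reason.
	kubectl --context old-k8s-version-20211231102602-6736 get nodes
	kubectl --context old-k8s-version-20211231102602-6736 describe node old-k8s-version-20211231102602-6736
	# Check whether kindnet ever wrote a CNI config inside the node
	# (default dir is /etc/cni/net.d; a profile can point the kubelet at a different cni-conf-dir).
	minikube ssh -p old-k8s-version-20211231102602-6736 -- sudo ls -l /etc/cni/net.d
	# Collect full logs for a bug report, as the error box above suggests.
	minikube logs -p old-k8s-version-20211231102602-6736 --file=logs.txt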

TestStartStop/group/embed-certs/serial/SecondStart (290.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1
E1231 10:43:29.444460    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1: exit status 80 (4m48.366331287s)

-- stdout --
	* [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	* Pulling base image ...
	* Restarting existing docker container for "embed-certs-20211231102953-6736" ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image kubernetesui/dashboard:v2.3.1
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I1231 10:43:22.844642  253675 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:43:22.844763  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844769  253675 out.go:310] Setting ErrFile to fd 2...
	I1231 10:43:22.844775  253675 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:43:22.844954  253675 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:43:22.845319  253675 out.go:304] Setting JSON to false
	I1231 10:43:22.847068  253675 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5157,"bootTime":1640942245,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:43:22.847193  253675 start.go:122] virtualization: kvm guest
	I1231 10:43:22.850701  253675 out.go:176] * [embed-certs-20211231102953-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:43:22.853129  253675 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:43:22.850948  253675 notify.go:174] Checking for updates...
	I1231 10:43:22.855641  253675 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:43:22.857638  253675 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:22.860223  253675 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:43:22.862455  253675 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:43:22.862933  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:22.863367  253675 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:43:22.907454  253675 docker.go:132] docker version: linux-20.10.12
	I1231 10:43:22.907559  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.010925  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:22.94341606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:43:23.011080  253675 docker.go:237] overlay module found
	I1231 10:43:23.014207  253675 out.go:176] * Using the docker driver based on existing profile
	I1231 10:43:23.014243  253675 start.go:280] selected driver: docker
	I1231 10:43:23.014249  253675 start.go:795] validating driver "docker" against &{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.014391  253675 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:43:23.014412  253675 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:43:23.014421  253675 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:43:23.014467  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.014493  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.017136  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.017884  253675 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:43:23.116838  253675 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:43:23.05133577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:43:23.116982  253675 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:43:23.117011  253675 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:43:23.119638  253675 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:43:23.119774  253675 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:43:23.119804  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:23.119812  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:23.119829  253675 start_flags.go:298] config:
	{Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:23.122399  253675 out.go:176] * Starting control plane node embed-certs-20211231102953-6736 in cluster embed-certs-20211231102953-6736
	I1231 10:43:23.122462  253675 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:43:23.124490  253675 out.go:176] * Pulling base image ...
	I1231 10:43:23.124541  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:23.124581  253675 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:43:23.124590  253675 cache.go:57] Caching tarball of preloaded images
	I1231 10:43:23.124659  253675 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:43:23.124888  253675 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:43:23.124904  253675 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:43:23.125057  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.163843  253675 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:43:23.163872  253675 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:43:23.163888  253675 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:43:23.163917  253675 start.go:313] acquiring machines lock for embed-certs-20211231102953-6736: {Name:mk30ade561e73ed15bb546a531be6f54b6b9c072 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:43:23.164009  253675 start.go:317] acquired machines lock for "embed-certs-20211231102953-6736" in 74.119µs
	I1231 10:43:23.164031  253675 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:43:23.164039  253675 fix.go:55] fixHost starting: 
	I1231 10:43:23.164295  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.199236  253675 fix.go:108] recreateIfNeeded on embed-certs-20211231102953-6736: state=Stopped err=<nil>
	W1231 10:43:23.199269  253675 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:43:23.202320  253675 out.go:176] * Restarting existing docker container for "embed-certs-20211231102953-6736" ...
	I1231 10:43:23.202389  253675 cli_runner.go:133] Run: docker start embed-certs-20211231102953-6736
	I1231 10:43:23.625205  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:43:23.664982  253675 kic.go:420] container "embed-certs-20211231102953-6736" state is running.
	I1231 10:43:23.665431  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:23.703812  253675 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/config.json ...
	I1231 10:43:23.704117  253675 machine.go:88] provisioning docker machine ...
	I1231 10:43:23.704144  253675 ubuntu.go:169] provisioning hostname "embed-certs-20211231102953-6736"
	I1231 10:43:23.704223  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:23.742698  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:23.743011  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:23.743039  253675 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20211231102953-6736 && echo "embed-certs-20211231102953-6736" | sudo tee /etc/hostname
	I1231 10:43:23.743711  253675 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58602->127.0.0.1:49427: read: connection reset by peer
	I1231 10:43:26.891264  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20211231102953-6736
	
	I1231 10:43:26.891349  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:26.930929  253675 main.go:130] libmachine: Using SSH client type: native
	I1231 10:43:26.931119  253675 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I1231 10:43:26.931150  253675 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20211231102953-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20211231102953-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20211231102953-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:43:27.068707  253675 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:43:27.068740  253675 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:43:27.068790  253675 ubuntu.go:177] setting up certificates
	I1231 10:43:27.068818  253675 provision.go:83] configureAuth start
	I1231 10:43:27.068869  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:27.106090  253675 provision.go:138] copyHostCerts
	I1231 10:43:27.106158  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:43:27.106172  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:43:27.106233  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:43:27.106338  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:43:27.106358  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:43:27.106382  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:43:27.106444  253675 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:43:27.106453  253675 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:43:27.106472  253675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:43:27.106526  253675 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20211231102953-6736 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20211231102953-6736]
	I1231 10:43:27.255618  253675 provision.go:172] copyRemoteCerts
	I1231 10:43:27.255688  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:43:27.255719  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.293465  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.393419  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:43:27.414701  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1231 10:43:27.438482  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:43:27.461503  253675 provision.go:86] duration metric: configureAuth took 392.669293ms
	I1231 10:43:27.461542  253675 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:43:27.461744  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:43:27.461759  253675 machine.go:91] provisioned docker machine in 3.757626792s
	I1231 10:43:27.461767  253675 start.go:267] post-start starting for "embed-certs-20211231102953-6736" (driver="docker")
	I1231 10:43:27.461773  253675 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:43:27.461808  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:43:27.461836  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.504760  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.605497  253675 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:43:27.609459  253675 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:43:27.609488  253675 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:43:27.609499  253675 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:43:27.609505  253675 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:43:27.609516  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:43:27.609580  253675 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:43:27.609669  253675 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:43:27.609751  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:43:27.618322  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:27.638480  253675 start.go:270] post-start completed in 176.700691ms
	I1231 10:43:27.638544  253675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:43:27.638578  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.678338  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.773349  253675 fix.go:57] fixHost completed within 4.609301543s
	I1231 10:43:27.773377  253675 start.go:80] releasing machines lock for "embed-certs-20211231102953-6736", held for 4.609356997s
	I1231 10:43:27.773448  253675 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20211231102953-6736
	I1231 10:43:27.808989  253675 ssh_runner.go:195] Run: systemctl --version
	I1231 10:43:27.809043  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.809080  253675 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:43:27.809149  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:43:27.849360  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.849710  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:43:27.941036  253675 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:43:27.971837  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:43:27.982942  253675 docker.go:158] disabling docker service ...
	I1231 10:43:27.983000  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:43:27.994466  253675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:43:28.005201  253675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:43:28.084281  253675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:43:28.165947  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:43:28.176963  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:43:28.193845  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
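	[Note: for readability, the base64 payload piped into /etc/containerd/config.toml above decodes to minikube's generated containerd configuration. An abridged excerpt of the decoded TOML follows (key fields only; the abridgment is editorial, the values are decoded straight from the log):
	  version = 2
	  root = "/var/lib/containerd"
	  state = "/run/containerd"
	  oom_score = 0
	  [grpc]
	    address = "/run/containerd/containerd.sock"
	    max_recv_message_size = 16777216
	    max_send_message_size = 16777216
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "k8s.gcr.io/pause:3.6"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      bin_dir = "/opt/cni/bin"
	      conf_dir = "/etc/cni/net.mk"
	The conf_dir of /etc/cni/net.mk matches the kubelet extra-config cni-conf-dir=/etc/cni/net.mk recorded in the cluster config earlier in this log.]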
	I1231 10:43:28.210061  253675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:43:28.218904  253675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:43:28.227395  253675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:43:28.309175  253675 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:43:28.390283  253675 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:43:28.390355  253675 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:43:28.396380  253675 start.go:458] Will wait 60s for crictl version
	I1231 10:43:28.396511  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:28.426104  253675 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:43:28Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
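	[Note: this first crictl probe races the "systemctl restart containerd" issued at 10:43:28; the CRI endpoint only answers once containerd has finished initializing, which is why retry.go backs off and the retry at 10:43:39 below succeeds. A minimal manual equivalent of this readiness poll, as a sketch only (not a command from this log; assumes crictl is installed on the node and run via the same SSH session):
	  until sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version >/dev/null 2>&1; do
	    sleep 1   # wait for containerd's CRI server to finish initializing
	  done]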
	I1231 10:43:39.474533  253675 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:43:39.501276  253675 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:43:39.501336  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.527002  253675 ssh_runner.go:195] Run: containerd --version
	I1231 10:43:39.551133  253675 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:43:39.551225  253675 cli_runner.go:133] Run: docker network inspect embed-certs-20211231102953-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:43:39.587623  253675 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1231 10:43:39.591336  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.604414  253675 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:43:39.606523  253675 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:43:39.608679  253675 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:43:39.608778  253675 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:43:39.608844  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.634556  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.634585  253675 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:43:39.634630  253675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:43:39.662182  253675 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:43:39.662208  253675 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:43:39.662251  253675 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:43:39.687863  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:39.687887  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:39.687902  253675 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:43:39.687916  253675 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20211231102953-6736 NodeName:embed-certs-20211231102953-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:43:39.688044  253675 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20211231102953-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:43:39.688123  253675 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=embed-certs-20211231102953-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1231 10:43:39.688169  253675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:43:39.696210  253675 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:43:39.696312  253675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:43:39.704267  253675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (638 bytes)
	I1231 10:43:39.718589  253675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:43:39.734360  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I1231 10:43:39.749132  253675 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:43:39.753026  253675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:43:39.764036  253675 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736 for IP: 192.168.58.2
	I1231 10:43:39.764162  253675 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:43:39.764206  253675 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:43:39.764332  253675 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/client.key
	I1231 10:43:39.764393  253675 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key.cee25041
	I1231 10:43:39.764430  253675 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key
	I1231 10:43:39.764535  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:43:39.764569  253675 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:43:39.764576  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:43:39.764600  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:43:39.764619  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:43:39.764640  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:43:39.764679  253675 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:43:39.765624  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:43:39.786589  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:43:39.806214  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:43:39.827437  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/embed-certs-20211231102953-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1231 10:43:39.847223  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:43:39.869717  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:43:39.892296  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:43:39.915269  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:43:39.940596  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:43:39.965015  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:43:39.987472  253675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:43:40.008065  253675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:43:40.023700  253675 ssh_runner.go:195] Run: openssl version
	I1231 10:43:40.029648  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:43:40.038817  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.042994  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.043064  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:43:40.049114  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:43:40.057598  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:43:40.067157  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071141  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.071208  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:43:40.077176  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:43:40.085041  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:43:40.093428  253675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097387  253675 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.097447  253675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:43:40.102969  253675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:43:40.110890  253675 kubeadm.go:388] StartCluster: {Name:embed-certs-20211231102953-6736 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:embed-certs-20211231102953-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:43:40.110993  253675 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:43:40.111061  253675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:43:40.137789  253675 cri.go:87] found id: "c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239"
	I1231 10:43:40.137839  253675 cri.go:87] found id: "03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f"
	I1231 10:43:40.137847  253675 cri.go:87] found id: "a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d"
	I1231 10:43:40.137854  253675 cri.go:87] found id: "7de9215e7da17128bb40a41f2f520ecf92f3eb1e98a7c3510a228471a97c4f2f"
	I1231 10:43:40.137861  253675 cri.go:87] found id: "bd3c847642a9fe6d581ed824b26b0b73d8344e03d63122f560cf62e61a262cb3"
	I1231 10:43:40.137868  253675 cri.go:87] found id: "4d064efc3679beff93c0b83a48f6fdc82cb98e282e856beb780502ca855801a3"
	I1231 10:43:40.137875  253675 cri.go:87] found id: "eb97b3087125dceba3df5197317e35791b4ca2f794effdfa4d5118543d9d3072"
	I1231 10:43:40.137883  253675 cri.go:87] found id: ""
	I1231 10:43:40.137935  253675 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:43:40.152791  253675 cri.go:114] JSON = null
	W1231 10:43:40.152840  253675 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 7
	I1231 10:43:40.152904  253675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:43:40.161320  253675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:43:40.168524  253675 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.169402  253675 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20211231102953-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:43:40.169777  253675 kubeconfig.go:127] "embed-certs-20211231102953-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:43:40.170362  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:43:40.172686  253675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:43:40.180820  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.180878  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.195345  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.395812  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.395894  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.412496  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.595573  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.595660  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.610754  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.795993  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.796074  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:40.812031  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:40.996111  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:40.996181  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.011422  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.195687  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.195777  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.211094  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.396304  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.396402  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.413206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.595459  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.595552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.611792  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.796061  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.796162  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:41.811694  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:41.995913  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:41.995991  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.013353  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.195522  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.195645  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.212206  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.396496  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.396584  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.414476  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.595660  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.595748  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.611643  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.796314  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.797268  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:42.814486  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:42.995563  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:42.995659  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.011246  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.195463  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.195552  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.211623  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:43:43.211657  253675 api_server.go:165] Checking apiserver status ...
	I1231 10:43:43.211698  253675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:43:43.227133  253675 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:43:43.227160  253675 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
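
The probe loop above is minikube re-running the same pgrep check roughly every 200ms until its deadline lapses, at which point it decides the cluster "needs reconfigure" and falls back to kubeadm reset. A minimal standalone sketch of that poll-until-deadline pattern, using the exact probe command from the log (the pollApiserver helper name is ours, not minikube's):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // pollApiserver repeats minikube's probe (api_server.go:165 above) until
    // pgrep finds a kube-apiserver process or the context deadline expires.
    func pollApiserver(ctx context.Context, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx,
                "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("apiserver never came up: %w", ctx.Err())
            case <-ticker.C: // retry on the next tick, as the log does every ~200ms
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := pollApiserver(ctx, 200*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }
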
	I1231 10:43:43.227194  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:43:43.962869  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:43:43.975076  253675 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:43:43.984301  253675 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:43:43.984358  253675 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:43:43.992612  253675 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:43:43.992653  253675 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:43:44.298681  253675 out.go:203]   - Generating certificates and keys ...
	I1231 10:43:45.481520  253675 out.go:203]   - Booting up control plane ...
	I1231 10:43:58.029054  253675 out.go:203]   - Configuring RBAC rules ...
	I1231 10:43:58.483326  253675 cni.go:93] Creating CNI manager for ""
	I1231 10:43:58.483357  253675 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:43:58.488726  253675 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:43:58.488827  253675 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:43:58.493521  253675 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:43:58.493558  253675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:43:58.512259  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:43:59.192508  253675 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:43:59.192665  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.192694  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=embed-certs-20211231102953-6736 minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.213675  253675 ops.go:34] apiserver oom_adj: -16
	I1231 10:43:59.322906  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:43:59.895946  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.395692  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:00.895655  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.395632  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:01.896412  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:02.395407  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:02.896452  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.396392  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:03.895366  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.396336  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:04.895859  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.395565  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:05.895587  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.395343  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:06.895271  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:07.395321  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:07.896038  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.396421  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:08.895347  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.395521  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:09.895830  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.395802  253675 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:44:10.501742  253675 kubeadm.go:864] duration metric: took 11.30912297s to wait for elevateKubeSystemPrivileges.
	I1231 10:44:10.501806  253675 kubeadm.go:390] StartCluster complete in 30.390935465s
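
The burst of `kubectl get sa default` calls above, one every ~500ms, is minikube waiting for kubeadm's controller-manager to create the `default` service account before it binds cluster-admin to kube-system (the elevateKubeSystemPrivileges step, which took ~11.3s here). A sketch of the same wait, reusing the binary and kubeconfig paths shown in the log (imports as in the previous sketch):

    // waitForDefaultSA retries the service-account lookup until kubeadm's
    // controllers have created it; binding RBAC any earlier would fail.
    func waitForDefaultSA(ctx context.Context) error {
        for {
            err := exec.CommandContext(ctx, "sudo",
                "/var/lib/minikube/binaries/v1.23.1/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // the log's retry cadence
            }
        }
    }
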
	I1231 10:44:10.501833  253675 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:10.501996  253675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:44:10.504119  253675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:44:11.026995  253675 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20211231102953-6736" rescaled to 1
	I1231 10:44:11.027076  253675 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:44:11.030066  253675 out.go:176] * Verifying Kubernetes components...
	I1231 10:44:11.027250  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:44:11.027529  253675 config.go:176] Loaded profile config "embed-certs-20211231102953-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:44:11.027546  253675 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:44:11.030253  253675 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030274  253675 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.030298  253675 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:44:11.030347  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030509  253675 addons.go:65] Setting dashboard=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030533  253675 addons.go:153] Setting addon dashboard=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.030537  253675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1231 10:44:11.030543  253675 addons.go:165] addon dashboard should already be in state true
	I1231 10:44:11.030572  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.030638  253675 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.030651  253675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20211231102953-6736"
	I1231 10:44:11.030970  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031137  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031253  253675 addons.go:65] Setting metrics-server=true in profile "embed-certs-20211231102953-6736"
	I1231 10:44:11.031280  253675 addons.go:153] Setting addon metrics-server=true in "embed-certs-20211231102953-6736"
	W1231 10:44:11.031288  253675 addons.go:165] addon metrics-server should already be in state true
	I1231 10:44:11.031157  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.031313  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.031695  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.094017  253675 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:44:11.101006  253675 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.100849  253675 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:44:11.102159  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:44:11.112212  253675 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:44:11.109078  253675 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20211231102953-6736"
	I1231 10:44:11.109381  253675 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:44:11.109540  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	W1231 10:44:11.112312  253675 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:44:11.112367  253675 host.go:66] Checking if "embed-certs-20211231102953-6736" exists ...
	I1231 10:44:11.112422  253675 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.112434  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:44:11.112456  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:44:11.112471  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:44:11.112489  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112497  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112389  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.112946  253675 cli_runner.go:133] Run: docker container inspect embed-certs-20211231102953-6736 --format={{.State.Status}}
	I1231 10:44:11.168999  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169017  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.169333  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.171810  253675 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.171837  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:44:11.171897  253675 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20211231102953-6736
	I1231 10:44:11.220524  253675 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:44:11.225393  253675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/embed-certs-20211231102953-6736/id_rsa Username:docker}
	I1231 10:44:11.379784  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:44:11.379826  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:44:11.383351  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:44:11.479751  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:44:11.479789  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:44:11.481212  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:44:11.481233  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:44:11.581046  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:44:11.581120  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:44:11.582256  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:44:11.582344  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:44:11.587594  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:44:11.679970  253675 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.680004  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:44:11.682163  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:44:11.682192  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:44:11.791172  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:44:11.791211  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:44:11.791675  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:44:11.895732  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:44:11.895775  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:44:11.995782  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:44:11.995814  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:44:12.085513  253675 start.go:773] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
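
The `replace -f -` pipeline at 10:44:11.220 above rewrites the CoreDNS ConfigMap in flight: sed splices a `hosts` block in front of the `forward . /etc/resolv.conf` directive so that `host.minikube.internal` resolves to the network gateway (192.168.58.1) from inside the cluster. The same splice expressed in Go, as a sketch (function name ours; imports fmt and strings):

    // injectHostRecord inserts a CoreDNS hosts block immediately before the
    // forward directive, mirroring the sed expression in the log above.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts) // same anchor the sed /^ forward .../i address uses
            }
            b.WriteString(line)
        }
        return b.String()
    }
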
	I1231 10:44:12.099705  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:44:12.099792  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:44:12.194606  253675 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.194725  253675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:44:12.297993  253675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:44:12.500547  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117122608s)
	I1231 10:44:12.995685  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.203961788s)
	I1231 10:44:12.995735  253675 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20211231102953-6736"
	I1231 10:44:13.179949  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:14.102814  253675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.804765707s)
	I1231 10:44:14.106088  253675 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:44:14.106137  253675 addons.go:417] enableAddons completed in 3.078602112s
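
Every addon above follows one flow: render the manifest in memory, scp it to /etc/kubernetes/addons/ on the node, then apply each group with a single kubectl invocation carrying one -f per file (four files for metrics-server, ten for the dashboard). A reduced local sketch of that batching, with placeholder paths standing in for minikube's ssh_runner transfer (imports fmt, os, os/exec, path/filepath):

    // applyAddonGroup writes each rendered manifest under dir and applies
    // the whole group in one `kubectl apply -f a.yaml -f b.yaml ...`,
    // as the log does for the metrics-server and dashboard groups.
    func applyAddonGroup(kubeconfig, dir string, files map[string][]byte) error {
        args := []string{"--kubeconfig=" + kubeconfig, "apply"}
        for name, data := range files {
            path := filepath.Join(dir, name)
            if err := os.WriteFile(path, data, 0o644); err != nil {
                return err
            }
            args = append(args, "-f", path)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v\n%s", err, out)
        }
        return nil
    }

One apply per group keeps the addon's objects landing together and cuts SSH round trips, which is presumably why the log shows a single long command rather than ten separate applies.
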
	I1231 10:44:15.629711  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:17.630121  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:20.129390  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:22.129653  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:24.629739  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:27.129548  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:29.129623  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:31.130167  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:33.629367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:35.630512  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:38.129072  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:40.129862  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:42.628942  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:44.630077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:47.129721  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:49.629888  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:52.129324  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:54.129718  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:56.629788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:44:59.129021  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:01.129651  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:03.629842  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:05.629877  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:08.128850  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:10.129558  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:12.629587  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:14.629796  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:17.129902  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:19.629779  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:22.129932  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:24.630806  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:27.130380  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:29.629652  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:31.629743  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:34.129422  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:36.629867  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:39.129103  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:41.129601  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:43.130375  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:45.628915  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:47.629608  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:50.129906  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:52.629755  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:55.129986  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:57.629109  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:45:59.630467  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:02.129432  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
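
The four minutes of Ready:False above are the actual failure: the kubelet never reported the node Ready inside the 6m budget. With the docker driver plus containerd, a node stuck NotReady commonly means the CNI (kindnet, per the cni.go:160 line earlier) never configured the pod network; the node's conditions name the blocking reason. A quick triage sketch that shells out to kubectl (imports os and os/exec; node name taken from the log):

    // printNodeConditions dumps type/status/reason for each node condition;
    // on a NotReady node the Ready row's reason usually points at the CNI.
    func printNodeConditions(node string) error {
        cmd := exec.Command("kubectl", "get", "node", node, "-o",
            "jsonpath={range .status.conditions[*]}{.type}={.status} {.reason}{\"\\n\"}{end}")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

For this run: printNodeConditions("embed-certs-20211231102953-6736").
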
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.137882  253675 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-20211231102953-6736 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1": exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:43:23.613074153Z",
	            "FinishedAt": "2021-12-31T10:43:22.23405709Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfc73bddd32d4a580e80ede53e861c2019c40094c0f4bf8dbec95ea0223d20b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfc73bddd32d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "9ce1b9b9e6af1217d03fa31376bf39eb0632af0bb5247bc92fc3c48c1620d77a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
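
The inspect JSON above is what minikube's earlier cli_runner templates walk: .NetworkSettings.Ports["22/tcp"][0].HostPort resolves to 49427, the host port its SSH clients then dial on 127.0.0.1. The same lookup as a small helper, using the identical Go template the log shows (imports fmt, os/exec, strings; helper name ours):

    // hostPortFor returns the host port docker bound for a container port,
    // via the same inspect template minikube's cli_runner executes above.
    func hostPortFor(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

For this run, hostPortFor("embed-certs-20211231102953-6736", "22/tcp") would return "49427", matching the sshutil.go:53 lines earlier in the log.
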
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.033402121s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:33:58 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
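
The `docker system info --format "{{json .}}"` probe above (repeated again below before driver validation) is how minikube snapshots daemon state before choosing a driver. A minimal stand-alone sketch of the same probe follows; the `dockerInfo` struct is a hypothetical trimmed-down stand-in that decodes only three of the fields visible in the logged blob:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo keeps only a few of the fields that appear in the logged
    // `docker info` output (CgroupDriver, MemTotal, ServerVersion).
    type dockerInfo struct {
        CgroupDriver  string `json:"CgroupDriver"`
        MemTotal      int64  `json:"MemTotal"`
        ServerVersion string `json:"ServerVersion"`
    }

    func main() {
        // Same command the log shows cli_runner.go executing.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("cgroup driver=%s mem=%d server=%s\n", info.CgroupDriver, info.MemTotal, info.ServerVersion)
    }
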
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
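
The machines lock above serializes concurrent starts against the same machine store; the 90µs acquisition shows it was uncontended, and the logged parameters (Delay:500ms, Timeout:10m0s) bound how long a contended caller would wait. A rough sketch of a named lock with a timeout follows; this helper is hypothetical, not minikube's lock package:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // A registry of one-slot channels, one per lock name. Sending into the
    // channel takes the lock; receiving from it releases the lock.
    var (
        mu    sync.Mutex
        locks = map[string]chan struct{}{}
    )

    func acquire(name string, timeout time.Duration) error {
        mu.Lock()
        ch, ok := locks[name]
        if !ok {
            ch = make(chan struct{}, 1)
            locks[name] = ch
        }
        mu.Unlock()
        select {
        case ch <- struct{}{}: // got the lock
            return nil
        case <-time.After(timeout):
            return fmt.Errorf("timed out acquiring %q after %s", name, timeout)
        }
    }

    func release(name string) {
        mu.Lock()
        ch := locks[name]
        mu.Unlock()
        <-ch
    }

    func main() {
        start := time.Now()
        if err := acquire("default-k8s-different-port", 10*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("acquired in", time.Since(start)) // microseconds when uncontended, as in the log
        release("default-k8s-different-port")
    }
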
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
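
The SSH command above is deliberately idempotent: it only touches /etc/hosts when no line already ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. The same check-then-edit logic as a minimal local Go sketch (the `ensureHostname` helper is illustrative, not minikube's code, and operates on a local file rather than over SSH):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func ensureHostname(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Already present on some line? Then nothing to do.
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil
        }
        // Rewrite an existing 127.0.1.1 entry if there is one, else append.
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.Match(data) {
            data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
        } else {
            data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
        }
        return os.WriteFile(path, data, 0644)
    }

    func main() {
        if err := ensureHostname("hosts", "default-k8s-different-port-20211231103230-6736"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
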
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
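
provision.go then generates a server certificate whose SANs cover the container IP, loopback, and both hostnames, signed by the CA under .minikube/certs. A self-contained sketch of producing such a SAN-bearing certificate with the standard library (self-signed here for brevity, where the real flow signs with the minikube CA; the SAN values and 26280h lifetime are taken from the log):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20211231103230-6736"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            // SANs mirror the san=[...] list logged above.
            DNSNames:    []string{"localhost", "minikube", "default-k8s-different-port-20211231103230-6736"},
            IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
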
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
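
The filesync scan above maps every file under .minikube/files/<path> to /<path> on the node, which is why files/etc/ssl/certs/67362.pem lands in /etc/ssl/certs. A small Go sketch of that path mapping (the `scanAssets` helper is hypothetical):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // scanAssets walks root and returns the absolute node-side destination
    // for each local asset, e.g. files/etc/ssl/certs/67362.pem -> /etc/ssl/certs/67362.pem.
    func scanAssets(root string) ([]string, error) {
        var targets []string
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            targets = append(targets, "/"+strings.TrimPrefix(path, root+string(filepath.Separator)))
            return nil
        })
        return targets, err
    }

    func main() {
        targets, err := scanAssets(".minikube/files")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, t := range targets {
            fmt.Println("would scp to", t)
        }
    }
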
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
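
Both file writes above use the same trick: the payload is shipped through the shell as base64 and decoded on the node with `base64 -d | sudo tee`, so the TOML never needs shell quoting or escaping (the decoded blob here begins `version = 2`). A minimal sketch of building such a command; the `writeFileCmd` helper is illustrative, not minikube's API:

    package main

    import (
        "encoding/base64"
        "fmt"
        "path/filepath"
    )

    // writeFileCmd returns a shell command that recreates `contents` at
    // `path` on the remote host, mirroring the logged mkdir/printf/base64 pipeline.
    func writeFileCmd(path, contents string) string {
        enc := base64.StdEncoding.EncodeToString([]byte(contents))
        return fmt.Sprintf(`sudo mkdir -p %s && printf %%s "%s" | base64 -d | sudo tee %s`,
            filepath.Dir(path), enc, path)
    }

    func main() {
        toml := "version = 2\nroot = \"/var/lib/containerd\"\n" // first lines of the decoded payload above
        fmt.Println(writeFileCmd("/etc/containerd/config.toml", toml))
    }
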
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
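
containerd was restarted only moments earlier, so the CRI endpoint answers "server is not initialized yet" and retry.go schedules another `sudo crictl version` roughly 11s later; it succeeds at 10:46:22 below. The general shape of that poll-until-ready loop, as a sketch (this `retryUntil` helper is not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryUntil re-runs fn at a fixed interval until it succeeds or the
    // overall timeout elapses, returning the last error on failure.
    func retryUntil(timeout, interval time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s: %w", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        calls := 0
        err := retryUntil(60*time.Second, time.Second, func() error {
            calls++
            if calls < 3 {
                return errors.New("server is not initialized yet")
            }
            return nil
        })
        fmt.Println(err, "after", calls, "attempts")
    }
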
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
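
The generated file above is four YAML documents in a single stream, separated by --- markers: InitConfiguration (node-local bootstrap settings such as the 8444 bind port), ClusterConfiguration (control-plane layout), KubeletConfiguration (note the 0% evictionHard thresholds, which disable disk-pressure eviction as the inline comment says), and KubeProxyConfiguration. A minimal Go sketch of how such a multi-document stream can be split and dispatched on its kind: field; this is toy parsing for illustration only, real consumers use a YAML library such as sigs.k8s.io/yaml:

    package main

    import (
        "fmt"
        "strings"
    )

    // kindOf returns the value of the kind: line in one YAML document.
    func kindOf(doc string) string {
        for _, line := range strings.Split(doc, "\n") {
            t := strings.TrimSpace(line)
            if strings.HasPrefix(t, "kind:") {
                return strings.TrimSpace(strings.TrimPrefix(t, "kind:"))
            }
        }
        return ""
    }

    func main() {
        // Abbreviated stand-in for the four-document config dumped above.
        cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
        for _, doc := range strings.Split(cfg, "\n---\n") {
            fmt.Println(kindOf(doc)) // InitConfiguration, ClusterConfiguration, ...
        }
    }
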
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
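
The one-liner above makes the /etc/hosts update idempotent: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back over /etc/hosts through a temp file. A rough Go equivalent is sketched below; ensureHostRecord is an illustrative name, not minikube's API, and writing the real /etc/hosts needs root:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostRecord drops any existing line for host and appends ip<TAB>host,
    // writing through a temp file the way the shell one-liner does.
    func ensureHostRecord(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale record; mirrors the grep -v $'\t...$' filter
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // replace in one step
    }

    func main() {
        _ = ensureHostRecord("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal")
    }
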
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
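
The pattern in the three blocks above is OpenSSL's hashed-directory convention: each CA certificate under /etc/ssl/certs is reachable through a symlink named after its subject-name hash (b5213941.0 for minikubeCA.pem here), which is exactly what openssl x509 -hash -noout computes. A hedged sketch of the same two steps; linkBySubjectHash is an illustrative name, and the sudo and test -L guards from the log are elided:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject-name hash
    // and symlinks <hash>.0 in the trust directory to the PEM file, which is
    // how OpenSSL locates CA certificates at runtime.
    func linkBySubjectHash(pem, trustDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(trustDir, hash+".0")
        _ = os.Remove(link) // fine if the link does not exist yet
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
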
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
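
The warning fires because two views of the runtime disagree: crictl ps reported six kube-system containers, while runc list -f json over the same root decoded to null, so minikube cannot tell whether any of them are paused. A small Go sketch of that consistency check under the same inputs; the shapes and counts are taken from the log, but the code is illustrative, not minikube's internals:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        runcJSON := []byte("null") // what runc list -f json returned above
        var listed []struct {
            ID string `json:"id"`
        }
        // Unmarshalling JSON null into a slice leaves it nil without error.
        if err := json.Unmarshal(runcJSON, &listed); err != nil {
            fmt.Println("parse error:", err)
            return
        }
        psCount := 6 // containers crictl ps reported
        if len(listed) != psCount {
            fmt.Printf("list returned %d containers, but ps returned %d\n", len(listed), psCount)
        }
    }
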
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
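
The block of "Checking apiserver status" entries above is a poll loop: roughly every 200ms minikube runs pgrep for a kube-apiserver process, and when the budget runs out with no hit it concludes the cluster needs reconfiguring and falls through to the kubeadm reset on the next line. A simplified Go sketch of such a loop; waitForAPIServer is an illustrative name, and once a pid is found the real check goes on to inspect the process and probe the apiserver's health endpoint, which is elided here:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer keeps probing for a kube-apiserver process until one
    // appears or the deadline passes, like the ~200ms loop in the log above.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching pid exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForAPIServer(3 * time.Second); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }
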
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
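
The oom_adj value is read from /proc/<pid>/oom_adj for the freshly started apiserver; a strongly negative value such as -16 tells the kernel's OOM killer to prefer other victims, so the control plane survives memory pressure. A tiny sketch of the same read, applied to the current process so the example runs without root; oomAdjOf is an illustrative helper:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdjOf reads /proc/<pid>/oom_adj, the value logged above as -16.
    func oomAdjOf(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        adj, err := oomAdjOf(os.Getpid())
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("oom_adj:", adj)
    }
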
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
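
From here the log alternates between two concurrent test profiles (pids 259177 and 253675), each polling its node's Ready condition every couple of seconds until the wait budget is exhausted. A hedged client-go sketch of what one such probe checks; it assumes a reachable kubeconfig and the k8s.io/client-go dependency, and nodeReady is an illustrative helper rather than minikube's code:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports the status of the node's Ready condition, the value
    // the node_ready lines below keep printing as "False".
    func nodeReady(cs *kubernetes.Clientset, name string) (v1.ConditionStatus, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return "", err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == v1.NodeReady {
                return c.Status, nil // "True", "False", or "Unknown"
            }
        }
        return v1.ConditionUnknown, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        status, err := nodeReady(cs, "default-k8s-different-port-20211231103230-6736")
        fmt.Println(status, err)
    }
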
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
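
The host record is injected by the sed pipeline a few lines up: it inserts a CoreDNS hosts{} stanza immediately before the "forward . /etc/resolv.conf" line of the Corefile so that host.minikube.internal resolves to the gateway (192.168.67.1), then replaces the ConfigMap. A string-level Go sketch of the same insertion; injectHostsBlock is an illustrative name and the sample Corefile is abbreviated:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostsBlock inserts a CoreDNS hosts{} stanza immediately before
    // the "forward . /etc/resolv.conf" line, mirroring the sed pipeline.
    func injectHostsBlock(corefile, ip string) string {
        block := "        hosts {\n           " + ip + " host.minikube.internal\n           fallthrough\n        }\n"
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(block)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
        fmt.Print(injectHostsBlock(corefile, "192.168.67.1"))
    }
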
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	b37d7178b58ae       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   b2ae6303a3d29
	5dd5ec40a3137       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   b2ae6303a3d29
	62c0d868eb022       b46c42588d511       4 minutes ago        Running             kube-proxy                0                   7d87514afca2c
	8896082530359       71d575efe6283       4 minutes ago        Running             kube-scheduler            1                   caee512e50be5
	bedf4fc421a5b       f51846a4fd288       4 minutes ago        Running             kube-controller-manager   1                   23a53efe4f57a
	5e4fcc10f62c1       b6d7abedde399       4 minutes ago        Running             kube-apiserver            1                   5526833e62549
	e2c98b3c8c237       25f8c7f3da61c       4 minutes ago        Running             etcd                      1                   9e5cd803bf1ec
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 10:48:12 UTC. --
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.212601326Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-bf6l7,Uid:5d4dbcfc-e2d6-453c-9ae2-f1cd8f5291f5,Namespace:kube-system,Attempt:0,}"
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.297206542Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff pid=1782
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.379524435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d87514afca2cf45ef534e32358ea9af0cca6a8953a09d0cbb25cf047bf7c28f pid=1802
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.684985615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bf6l7,Uid:5d4dbcfc-e2d6-453c-9ae2-f1cd8f5291f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d87514afca2cf45ef534e32358ea9af0cca6a8953a09d0cbb25cf047bf7c28f\""
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.688737896Z" level=info msg="CreateContainer within sandbox \"7d87514afca2cf45ef534e32358ea9af0cca6a8953a09d0cbb25cf047bf7c28f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.891601259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-sz9gt,Uid:a4e2bd9c-691b-43fc-99f8-6e269c1c58ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\""
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.907056861Z" level=info msg="CreateContainer within sandbox \"7d87514afca2cf45ef534e32358ea9af0cca6a8953a09d0cbb25cf047bf7c28f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491\""
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.907861491Z" level=info msg="StartContainer for \"62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491\""
	Dec 31 10:44:11 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:11.908433001Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Dec 31 10:44:12 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:12.094634337Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a\""
	Dec 31 10:44:12 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:12.095345966Z" level=info msg="StartContainer for \"5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a\""
	Dec 31 10:44:12 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:12.405634043Z" level=info msg="StartContainer for \"62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491\" returns successfully"
	Dec 31 10:44:12 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:44:12.593629882Z" level=info msg="StartContainer for \"5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a\" returns successfully"
	Dec 31 10:45:03 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:45:03.484904888Z" level=error msg="ContainerStatus for \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03de7bcc0efa5b14813c0fd8f61858d9f263aab8711cb23930e549d025b69e7f\": not found"
	Dec 31 10:45:03 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:45:03.485472479Z" level=error msg="ContainerStatus for \"c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4090927d59b8d0231d9972079e3b14697c8f3127d96ddaed42ac933ada12239\": not found"
	Dec 31 10:45:03 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:45:03.485972988Z" level=error msg="ContainerStatus for \"a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a380c0d98153cc2ab6095cc8b6eaf0192543775721df6c9cb6c5e4e510d1636d\": not found"
	Dec 31 10:46:52 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:52.912986839Z" level=info msg="Finish piping stderr of container \"5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a\""
	Dec 31 10:46:52 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:52.913043005Z" level=info msg="Finish piping stdout of container \"5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a\""
	Dec 31 10:46:52 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:52.914756364Z" level=info msg="TaskExit event &TaskExit{ContainerID:5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a,ID:5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a,Pid:2011,ExitStatus:2,ExitedAt:2021-12-31 10:46:52.914262546 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:46:52 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:52.950552247Z" level=info msg="shim disconnected" id=5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a
	Dec 31 10:46:52 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:52.950674525Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:46:53 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:53.095409715Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Dec 31 10:46:53 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:53.124541502Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"b37d7178b58ae4192845c7c2d77ea5f32aac049f2de8ba2ecbf469e732d957ac\""
	Dec 31 10:46:53 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:53.128316775Z" level=info msg="StartContainer for \"b37d7178b58ae4192845c7c2d77ea5f32aac049f2de8ba2ecbf469e732d957ac\""
	Dec 31 10:46:53 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:46:53.208424974Z" level=info msg="StartContainer for \"b37d7178b58ae4192845c7c2d77ea5f32aac049f2de8ba2ecbf469e732d957ac\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:43:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:44:10 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:44:10 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:44:10 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:44:10 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-sz9gt                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-bf6l7                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m59s                  kube-proxy  
	  Normal  Starting                 4m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [e2c98b3c8c23748a45dcacedd39e95616ad8442e36bb6b6fda207f3c9cd41381] <==
	* {"level":"info","ts":"2021-12-31T10:43:51.996Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2021-12-31T10:43:51.990Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-12-31T10:43:52.009Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.381Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.383Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:43:52.384Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  10:48:12 up  1:30,  0 users,  load average: 0.77, 1.10, 1.64
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [5e4fcc10f62c1036c54aadc249c4bd994a626fae29d5142d5ec3303290197b95] <==
	* I1231 10:43:56.815060       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:43:56.819746       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:43:57.194843       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:43:58.285233       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:43:58.301309       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:43:58.390562       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:44:03.584842       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:44:10.706005       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:44:10.795889       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:44:12.805452       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1231 10:44:12.989440       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.110.95.150]
	W1231 10:44:13.609347       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:44:13.609428       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:44:13.609438       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:44:14.029672       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.143.237]
	I1231 10:44:14.096310       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.8.186]
	W1231 10:45:13.610564       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:45:13.610655       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:45:13.610668       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:47:13.610874       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:47:13.610957       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:47:13.610967       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bedf4fc421a5b6eb6f42473129bdc4ccad70191cd113ec514f00efa708d19047] <==
	* I1231 10:44:13.786825       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1231 10:44:13.791545       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:44:13.791901       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1231 10:44:13.794772       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:44:13.794777       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1231 10:44:13.796179       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:44:13.796205       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1231 10:44:13.896445       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-8ctl7"
	I1231 10:44:13.899332       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-dwbwm"
	E1231 10:44:40.288596       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:44:40.708404       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:45:10.307685       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:45:10.726584       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:45:40.328515       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:45:40.744772       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:46:10.348526       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:46:10.772924       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:46:40.365547       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:46:40.790740       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:47:10.383188       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:10.807818       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:47:40.399610       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:40.824545       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:48:10.414006       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:48:10.839970       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491] <==
	* I1231 10:44:12.607400       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:44:12.607485       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:44:12.607591       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:44:12.801716       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:44:12.802050       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:44:12.802136       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:44:12.802156       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:44:12.802606       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:44:12.803335       1 config.go:317] "Starting service config controller"
	I1231 10:44:12.803356       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:44:12.803492       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:44:12.803697       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:44:12.904704       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:44:12.907142       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [889608253035982887397c394e7ec41a7768efbf6a0e85f40e25bcc483a2df07] <==
	* W1231 10:43:55.191217       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:43:55.191299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:43:55.193121       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:43:55.193371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1231 10:43:55.193600       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:43:55.193679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:43:55.194203       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:43:55.194299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:43:55.194319       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:55.194336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:55.194512       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:43:55.194595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:43:55.194973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:43:55.195211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:43:55.195010       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:55.195470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.133167       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:56.133248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:56.183335       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:43:56.183397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:43:56.356344       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:56.356384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.381076       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:43:56.381112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1231 10:43:59.289785       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 10:48:12 UTC. --
	Dec 31 10:46:13 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:13.812872    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:18 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:18.813690    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:23 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:23.815313    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:28 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:28.816579    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:33 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:33.818165    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:38 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:38.819409    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:43 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:43.820605    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:48 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:48.822106    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:53 embed-certs-20211231102953-6736 kubelet[1376]: I1231 10:46:53.093141    1376 scope.go:110] "RemoveContainer" containerID="5dd5ec40a3137b3e870091b9e7edbfc576476d6f38e979c077ba6c83deea198a"
	Dec 31 10:46:53 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:53.823719    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:46:58 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:46:58.824980    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:03 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:03.826404    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:08 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:08.827452    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:13 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:13.828472    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:18 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:18.829992    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:23 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:23.830949    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:28 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:28.831917    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:33 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:33.833619    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:38 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:38.834894    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:43 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:43.836154    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:48 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:48.837168    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:53 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:53.838179    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:47:58 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:47:58.839484    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:48:03 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:48:03.841200    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:48:08 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:48:08.842921    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1 (76.850269ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-4t6g6" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-fwqnh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-8ctl7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-dwbwm" not found

** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (290.80s)
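The embed-certs failure above comes down to the node never leaving Ready=False within the 4m0s/6m0s waits: kubelet logs "Container runtime network not ready ... cni plugin not initialized" every five seconds, and the container status shows the kindnet-cni container exiting with status 2 and being restarted. A minimal triage sketch against a still-running profile, reusing only the context and pod names that appear in the logs above; exact output will vary:

	# Confirm the node's Ready condition and kubelet's stated reason
	kubectl --context embed-certs-20211231102953-6736 describe node embed-certs-20211231102953-6736 | grep -A 1 Ready
	# Inspect the CNI pod; --previous prints logs from the exited attempt
	kubectl --context embed-certs-20211231102953-6736 -n kube-system get pods -o wide
	kubectl --context embed-certs-20211231102953-6736 -n kube-system logs kindnet-sz9gt --previous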

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-br66c" [192fe7d8-9684-482f-9c73-819e68be9963] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
E1231 10:44:41.557229    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:44:53.360162    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:45:10.310053    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:45:22.104864    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:45:36.756397    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
start_stop_delete_test.go:259: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2021-12-31 10:53:20.847513058 +0000 UTC m=+4299.829329002
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe po kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context old-k8s-version-20211231102602-6736 describe po kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard:
Name:           kubernetes-dashboard-766959b846-br66c
Namespace:      kubernetes-dashboard
Priority:       0
Node:           <none>
Labels:         gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=766959b846
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/kubernetes-dashboard-766959b846
Containers:
kubernetes-dashboard:
Image:      kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
Port:       9090/TCP
Host Port:  0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:  <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-qmh7q (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kubernetes-dashboard-token-qmh7q:
Type:        Secret (a volume populated by a Secret)
SecretName:  kubernetes-dashboard-token-qmh7q
Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                From               Message
----     ------            ----               ----               -------
Warning  FailedScheduling  13m                default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning  FailedScheduling  11m (x1 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 logs kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context old-k8s-version-20211231102602-6736 logs kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard:
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
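The dashboard pod above never schedules: both FailedScheduling events report "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate." Per the describe output, the pod tolerates node-role.kubernetes.io/master:NoSchedule plus the not-ready/unreachable NoExecute taints, so the blocker is most likely a node.kubernetes.io/not-ready:NoSchedule taint of the kind a NotReady node carries (compare the embed-certs node's Taints field earlier in this report). A small sketch to surface the offending taint, assuming the old-k8s-version profile is still up; the context name is taken from the logs above:

	# Show each node's taints so they can be compared against the pod's tolerations
	kubectl --context old-k8s-version-20211231102602-6736 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'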
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:39:29.118004867Z",
	            "FinishedAt": "2021-12-31T10:39:27.710647386Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7bcb66e32570c51223584d89c06c38407a807612f74bbcd0645dab033af753ae",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7bcb66e32570",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "b5fcec0b7d4b06090fe9be385801ede5fd25d0e4d16b5573d54b18438c62a2e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
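When only a few fields of the dump above matter, docker inspect accepts the same Go-template syntax the harness itself uses later in this log; for example (illustrative, not part of the recorded run):

    # Print just the container state and the host port forwarded to the API server (8443/tcp).
    docker inspect old-k8s-version-20211231102602-6736 \
      --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'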
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.277567308s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| pause   | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| unpause | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
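
The four systemctl calls above are the standard sequence for taking the Docker engine out of the picture so that containerd alone serves CRI: stop the socket unit first so it cannot re-activate the service, stop the service itself, disable the socket, then mask the service. A minimal standalone sketch of the same sequence (not minikube code, just the equivalent shell):

    sudo systemctl stop -f docker.socket     # stop the socket first so it cannot respawn the service
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service       # mask prevents any future activation
    sudo systemctl is-active --quiet docker || echo "docker is stopped"
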
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
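
The long base64 payload above is minikube's containerd config.toml, shipped encoded so it survives shell quoting; base64 -d restores the plain TOML before tee writes it to /etc/containerd/config.toml. To inspect what was written (a sketch; CONFIG_B64 is a hypothetical variable holding the payload shown above):

    echo "$CONFIG_B64" | base64 -d | head -n 20
    # or read the decoded file directly on the node:
    sudo head -n 20 /etc/containerd/config.toml
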
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
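
The one-liner above is an idempotent /etc/hosts update: it filters out any existing host.minikube.internal line, appends the fresh mapping, and copies the temp file back under sudo so the shell redirection itself needs no root. The same pattern generalized (a sketch; NAME, IP, and TAB are illustrative variables, not minikube's):

    NAME=host.minikube.internal; IP=192.168.67.1; TAB=$'\t'
    { grep -v "${TAB}${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
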
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
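
The four documents above (InitConfiguration and ClusterConfiguration pinned to port 8444, plus KubeletConfiguration and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml below. A config like this can be sanity-checked without mutating the node, as a sketch (assumes kubeadm v1.23 on PATH and root):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
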
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
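
The [Service] drop-in above overrides ExecStart so the kubelet runs against the remote containerd endpoints and carries the extra flags requested via --extra-config (global-housekeeping-interval, housekeeping-interval, cni-conf-dir). Once it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below, the effective unit can be reviewed and reloaded (sketch):

    systemctl cat kubelet          # shows the base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # required before the new ExecStart takes effect
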
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
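
The openssl/ln pairs above implement OpenSSL's hashed-directory convention: a CA is trusted when /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at its PEM, and the hashes seen in the log (51391683, 3ec20f2e, b5213941) come from openssl x509 -hash. The same install step written out (a sketch; CERT and HASH are illustrative names):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
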
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true ex
tra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
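
The warning above is minikube cross-checking two views of the runtime: crictl found six kube-system containers, while runc's listing of the k8s.io root came back empty (JSON = null), so the paused-container check found nothing to unpause and the mismatch is benign here. The same comparison by hand (sketch):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json
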
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
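
Each "Checking apiserver status" block above is one iteration of a roughly 200ms poll: pgrep looks for a kube-apiserver process and, because the runtime was just reset, never finds one, so once the poll budget expires minikube concludes the control plane needs reconfiguring and runs kubeadm reset below. The probe it repeats is simply (sketch):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running yet"
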
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
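
The -16 read back above comes from /proc/<pid>/oom_adj: kubeadm gives control-plane components a strongly negative OOM adjustment so the kernel evicts ordinary workloads before the apiserver under memory pressure. Reading it for a running apiserver (a sketch; -n picks the newest pid if several match):

    cat /proc/"$(pgrep -n kube-apiserver)"/oom_adj
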
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
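
The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the "forward . /etc/resolv.conf" line, and replaces the ConfigMap, which is how host.minikube.internal becomes resolvable inside the cluster. The injected stanza can be confirmed afterwards (a sketch, assuming kubectl points at this cluster):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    # expected:
    #   hosts {
    #      192.168.67.1 host.minikube.internal
    #      fallthrough
    #   }
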
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
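Both GUEST_START failures above reduce to the same symptom: the node never reports Ready inside the 6m wait that node_ready.go polls. A minimal sketch of reproducing that check by hand, assuming kubectl is pointed at the cluster under test (node name taken from the log above; this command is not part of the harness output):

	# Exits non-zero if the node does not reach Ready within the timeout
	kubectl wait --for=condition=Ready node/default-k8s-different-port-20211231103230-6736 --timeout=6m
	# Or read the Ready condition directly
	kubectl get node default-k8s-different-port-20211231103230-6736 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'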
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	93fd192748f7b       6de166512aa22       52 seconds ago      Running             kindnet-cni               4                   8e9c4ebe9af8a
	577abf6458465       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   8e9c4ebe9af8a
	260191439414c       c21b0c7400f98       13 minutes ago      Running             kube-proxy                0                   27df6e69859e8
	360d04f1d2e49       b305571ca60a5       13 minutes ago      Running             kube-apiserver            0                   9a26f93849781
	e488bccab2c37       06a629a7e51cd       13 minutes ago      Running             kube-controller-manager   0                   3c7deabb07da8
	02f5bc6f1fdd0       b2756210eeabf       13 minutes ago      Running             etcd                      0                   9d962dd90af06
	b939ed1a80a18       301ddc62b80b1       13 minutes ago      Running             kube-scheduler            0                   416af3a4e9b8c
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 10:53:22 UTC. --
	Dec 31 10:45:53 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:45:53.871627070Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\""
	Dec 31 10:45:53 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:45:53.872226666Z" level=info msg="StartContainer for \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\""
	Dec 31 10:45:54 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:45:54.084852290Z" level=info msg="StartContainer for \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\" returns successfully"
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.303536437Z" level=info msg="Finish piping stderr of container \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\""
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.303554167Z" level=info msg="Finish piping stdout of container \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\""
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.304401628Z" level=info msg="TaskExit event &TaskExit{ContainerID:7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae,ID:7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae,Pid:3383,ExitStatus:2,ExitedAt:2021-12-31 10:48:34.304044236 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.333151166Z" level=info msg="shim disconnected" id=7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.333265934Z" level=error msg="copy shim log" error="read /proc/self/fd/79: file already closed"
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.750521669Z" level=info msg="RemoveContainer for \"5901cfe67efb1a9a3756aa165a9fd6dcc93d517f15984cb489ba3da53d87e4e0\""
	Dec 31 10:48:34 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:34.762475932Z" level=info msg="RemoveContainer for \"5901cfe67efb1a9a3756aa165a9fd6dcc93d517f15984cb489ba3da53d87e4e0\" returns successfully"
	Dec 31 10:48:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:59.846921668Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:48:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:59.877084475Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\""
	Dec 31 10:48:59 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:48:59.878119601Z" level=info msg="StartContainer for \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\""
	Dec 31 10:49:00 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:49:00.087372411Z" level=info msg="StartContainer for \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\" returns successfully"
	Dec 31 10:51:40 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:40.307464700Z" level=info msg="Finish piping stderr of container \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\""
	Dec 31 10:51:40 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:40.307558354Z" level=info msg="Finish piping stdout of container \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\""
	Dec 31 10:51:40 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:40.308457457Z" level=info msg="TaskExit event &TaskExit{ContainerID:577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1,ID:577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1,Pid:3818,ExitStatus:2,ExitedAt:2021-12-31 10:51:40.308105657 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:51:40 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:40.335513786Z" level=info msg="shim disconnected" id=577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1
	Dec 31 10:51:40 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:40.335599371Z" level=error msg="copy shim log" error="read /proc/self/fd/79: file already closed"
	Dec 31 10:51:41 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:41.033009181Z" level=info msg="RemoveContainer for \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\""
	Dec 31 10:51:41 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:51:41.040995444Z" level=info msg="RemoveContainer for \"7e4501353cf7601b047e39573a01997087971f2b0488ab6585b45fb7fc74a2ae\" returns successfully"
	Dec 31 10:52:29 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:29.847246847Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Dec 31 10:52:29 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:29.869808674Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:52:29 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:29.870874136Z" level=info msg="StartContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:52:30 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:30.084954100Z" level=info msg="StartContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\" returns successfully"
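The containerd log above shows kindnet-cni starting successfully and then exiting with status 2 (attempts 2 through 4), i.e. a crash loop rather than a pull or create failure. A sketch of how the loop could be inspected from inside the node, assuming ssh access to this profile (profile name and container id taken from the log):

	minikube ssh -p old-k8s-version-20211231102602-6736
	# List every kindnet attempt, including exited ones
	sudo crictl ps -a --name kindnet-cni
	# The exit reason should be in the logs of the last exited attempt
	sudo crictl logs 577abf6458465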
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:39:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:52:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:52:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:52:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:52:56 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kindnet-wttrw                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-7nkns                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736              100m (1%)    0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
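The Ready=False condition above blames an uninitialized CNI plugin, which matches the kindnet crash loop in the containerd log. One way to confirm whether a CNI config ever landed on disk, assuming the docker-driver node container is still running (container name from the log; the conflist file name is kindnet's usual default, not shown in this report):

	# An empty /etc/cni/net.d is exactly what "cni plugin not initialized" means
	docker exec old-k8s-version-20211231102602-6736 ls -la /etc/cni/net.d
	docker exec old-k8s-version-20211231102602-6736 cat /etc/cni/net.d/10-kindnet.conflist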
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [02f5bc6f1fdd0081ac22b1216606e6a5da1908f6dd8b37174cb86189c9245c90] <==
	* 2021-12-31 10:39:52.116764 I | raft: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2021-12-31 10:39:52.116767 I | raft: aec36adc501070cc became follower at term 1
	2021-12-31 10:39:52.183063 W | auth: simple token is not cryptographically signed
	2021-12-31 10:39:52.187488 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2021-12-31 10:39:52.187856 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-12-31 10:39:52.188222 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:39:52.190334 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:39:52.190683 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:39:52.190768 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:39:53.117162 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:39:53.117224 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:39:53.117263 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:39:53.117287 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:39:53.117298 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:39:53.117591 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:39:53.117619 I | embed: ready to serve client requests
	2021-12-31 10:39:53.117650 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:39:53.117691 I | embed: ready to serve client requests
	2021-12-31 10:39:53.120604 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:39:53.121294 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:39:53.121579 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:39:53.121964 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:40:16.985363 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-9zxjv\" " with result "range_response_count:1 size:1435" took too long (179.374899ms) to execute
	2021-12-31 10:49:53.137699 I | mvcc: store.index: compact 565
	2021-12-31 10:49:53.138706 I | mvcc: finished scheduled compaction at 565 (took 609.084µs)
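Nothing in the etcd log above explains the NotReady node; the only anomaly is a single 179ms slow range read. For completeness, a liveness probe against this member, assuming etcdctl is available inside the node image and reusing the cert paths printed at startup above:

	docker exec old-k8s-version-20211231102602-6736 sh -c \
	  'ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	     --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	     --cert=/var/lib/minikube/certs/etcd/server.crt \
	     --key=/var/lib/minikube/certs/etcd/server.key endpoint health'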
	
	* 
	* ==> kernel <==
	*  10:53:22 up  1:35,  0 users,  load average: 0.74, 0.83, 1.35
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [360d04f1d2e497917d40c239dd1ddc12199edce8119e293dbbf9e16d2ff6195d] <==
	* I1231 10:45:57.214010       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:45:57.214116       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:45:57.214172       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:45:57.214211       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:47:57.214486       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:47:57.214616       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:47:57.214767       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:47:57.214794       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:49:57.217248       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:49:57.217348       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:49:57.217424       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:49:57.217446       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:50:57.217731       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:50:57.217845       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:50:57.217906       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:50:57.217920       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:52:57.218172       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:52:57.218266       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:52:57.218342       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:52:57.218361       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
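The apiserver log above is the same 503 for v1beta1.metrics.k8s.io on every requeue: the aggregated metrics API has no reachable backend, which is expected while the node (and therefore the metrics-server pod) is NotReady. A hedged way to confirm which side is failing, assuming the standard minikube addon labels:

	# Available=False with MissingEndpoints points at the backend, not the apiserver
	kubectl get apiservice v1beta1.metrics.k8s.io
	kubectl -n kube-system get deploy,pods -l k8s-app=metrics-server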
	
	* 
	* ==> kube-controller-manager [e488bccab2c374294dc4e1182a1f7c01461d03131238f7c50a0b1a3bc38b498d] <==
	* E1231 10:46:50.477001       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:13.235305       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:47:20.729065       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:45.237123       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:47:50.980915       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:48:17.238911       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:48:21.233220       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:48:49.241473       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:48:51.484998       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:49:21.243560       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:49:21.736952       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1231 10:49:51.988756       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:49:53.245393       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:50:22.240508       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:50:25.247091       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:50:52.492545       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:50:57.248908       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:51:22.744271       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:51:29.250738       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:51:52.995606       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:52:01.252803       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:52:23.248409       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:52:33.254582       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:52:53.500540       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:53:05.256812       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8] <==
	* W1231 10:40:17.793443       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:40:17.813322       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:40:17.813381       1 server_others.go:149] Using iptables Proxier.
	I1231 10:40:17.814012       1 server.go:529] Version: v1.16.0
	I1231 10:40:17.814913       1 config.go:313] Starting service config controller
	I1231 10:40:17.814953       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:40:17.819152       1 config.go:131] Starting endpoints config controller
	I1231 10:40:17.819190       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:40:17.915302       1 shared_informer.go:204] Caches are synced for service config 
	I1231 10:40:17.919428       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b939ed1a80a1833f964e536cf3c9e9cdc859e60141643e37c72deb76b9c1a7d7] <==
	* E1231 10:39:56.485784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:56.486236       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:56.487207       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:56.487313       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:56.487518       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:56.488908       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:56.489055       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:56.489075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:56.490229       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:56.490565       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:56.491226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.487195       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:57.488364       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:57.489386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:57.490301       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:57.491544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:57.492628       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.493751       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:57.496400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:57.497358       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:57.499029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:57.499226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:40:16.986652       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.001392       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.791910       1 factory.go:585] pod is already present in the activeQ
	
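Note: the reflector failures above are the familiar transient startup errors: the scheduler's informers begin listing resources before kubeadm has finished reconciling the system:kube-scheduler RBAC bindings, and the retries succeed once the grants land (consistent with the errors stopping after 10:39:57). On a live cluster the grants can be confirmed with commands like the following (illustrative, not part of this run):

	kubectl get clusterrolebinding system:kube-scheduler -o wide
	kubectl auth can-i list pods --as=system:kube-scheduler
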
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 10:53:22 UTC. --
	Dec 31 10:52:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:21.175486     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:23 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:23.529296     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:52:23 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:23.529338     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:52:26 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:26.176455     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:31 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:31.177525     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:33 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:33.560956     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:52:33 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:33.560996     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:52:36 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:36.178443     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:41.179410     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:43 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:43.593050     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:52:43 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:43.593091     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:52:46 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:46.180370     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:51 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:51.181164     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:52:53 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:53.628916     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:52:53 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:53.628960     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:52:56 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:52:56.182251     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:53:01 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:01.183202     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:53:03 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:03.659993     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:53:03 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:03.660037     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:53:06 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:06.184065     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:53:11 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:11.184961     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:53:13 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:13.692075     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 10:53:13 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:13.692130     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 10:53:16 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:16.185842     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 10:53:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 10:53:21.186788     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
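Note: the kubelet log above repeats two errors. "Container runtime network not ready ... cni plugin not initialized" means no CNI config ever appeared in the directory this kubelet watches, so the node never reports Ready and the pods listed below stay non-running; the "/kubepods" cgroup-stats failures are most likely the known mismatch between the cgroup path this older kubelet expects and the nested cgroup layout inside the kic container, noisy but not the cause of the failure. Were the node still up, the CNI side could be checked directly (illustrative; the conf dir varies per profile):

	out/minikube-linux-amd64 ssh -p old-k8s-version-20211231102602-6736 "sudo ls -la /etc/cni/net.d /etc/cni/net.mk"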
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1 (92.243929ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-9zxjv" not found
	Error from server (NotFound): pods "metrics-server-5b7b789f-vbdjk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-sdtqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-766959b846-br66c" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1
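Note: the describe step reports NotFound even though the pods exist because kubectl describe is run without a namespace and therefore only searches "default", while the non-running pods live in kube-system and kubernetes-dashboard. An explicit namespace would resolve them (illustrative):

	kubectl --context old-k8s-version-20211231102602-6736 -n kube-system describe pod storage-provisioner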
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.10s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (291.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1
E1231 10:46:13.450760    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:46:59.313054    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:47:22.315933    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 10:47:39.269651    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
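Note: the cert_rotation errors above appear to come from the long-running test binary still watching client certificates for profiles that were deleted earlier in the run (their client.crt files no longer exist); they are noise from stale kubeconfig contexts, not part of this test's failure. Cleaning up such leftovers would look like (illustrative):

	kubectl config get-contexts
	kubectl config delete-context enable-default-cni-20211231101406-6736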

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1: exit status 80 (4m48.813288433s)

-- stdout --
	* [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	* Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	  - kubelet.global-housekeeping-interval=60m
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.3.1
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6
736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:tru
e default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
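Note: the base64 payload written to /etc/containerd/config.toml above is the generated containerd configuration. Decoding it with base64 -d shows, among other settings, that minikube pins the CRI plugin's sandbox image and CNI paths for this profile:

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "k8s.gcr.io/pause:3.6"
	  ...
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    bin_dir = "/opt/cni/bin"
	    conf_dir = "/etc/cni/net.mk"
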
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
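Note: the first sudo crictl version at 10:46:11 failed with "server is not initialized yet" because containerd's CRI service needs a moment after systemctl restart containerd; minikube retries the same probe with backoff inside its 60s budget, and it answers here at 10:46:22. The equivalent manual wait is just (sketch, assuming the same 60s budget):

	timeout 60 sh -c 'until sudo crictl version >/dev/null 2>&1; do sleep 2; done'
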
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
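Note: the hosts entry is rewritten with a grep-and-copy pattern rather than sed -i, presumably because /etc/hosts is bind-mounted into the kic container and must be overwritten in place rather than replaced with a new inode. The same pattern, generalized (sketch):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.67.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
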
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
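The config rendered above is written to /var/tmp/minikube/kubeadm.yaml on the node (see the scp and `diff -u` steps below). To eyeball the live copy by hand, a sketch, assuming the profile name from this run and minikube's pass-through ssh:

	minikube -p default-k8s-different-port-20211231103230-6736 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"
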
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
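The unit drop-in above is what lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in the scp just below. To confirm the flags the kubelet actually starts with, a manual sketch (not something the test itself runs):

	minikube -p default-k8s-different-port-20211231103230-6736 ssh "sudo systemctl cat kubelet"
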
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
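That one-liner is the usual sudo-safe way to rewrite /etc/hosts: a bare `sudo echo ... >> /etc/hosts` would fail because the redirection happens in the unprivileged shell. Spelled out, it is equivalent to:

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # keep everything except any stale entry
	printf '192.168.67.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$  # append the fresh mapping
	sudo cp /tmp/h.$$ /etc/hosts                                           # install it as root
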
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
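The `openssl x509 -hash` calls explain the cryptic symlink names: OpenSSL locates trusted CAs by subject-name hash, looking up /etc/ssl/certs/<hash>.0, which is why each cert above is linked as 51391683.0, 3ec20f2e.0 and b5213941.0. For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above
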
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
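The warning above is a pre-reconfigure consistency check: minikube lists paused kube-system containers so it can unpause them, but the two probes disagree, so the unpause step is skipped. Exactly as run:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # returns the 6 IDs above
	sudo runc --root /run/containerd/runc/k8s.io list -f json                   # returns null
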
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the same "Checking apiserver status" / pgrep probe repeated 15 more times, every ~200ms from 10:46:23.588 to 10:46:26.388, each attempt exiting with status 1 and empty stdout/stderr ...]
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
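An oom_adj of -16 leaves the apiserver strongly shielded from the kernel OOM killer (the legacy /proc/<pid>/oom_adj scale runs from -17, never kill, to +15). The value comes from the probe started at 10:46:42.303 above:

	cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16
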
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same `kubectl get sa default` probe repeated every 500ms from 10:46:42.997 to 10:46:53.497 ...]
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
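The sed pipeline above splices a `hosts` stanza into the CoreDNS Corefile immediately before its `forward . /etc/resolv.conf` directive, so in-cluster lookups of host.minikube.internal resolve to the Docker network gateway. The injected block is:

	hosts {
	   192.168.67.1 host.minikube.internal
	   fallthrough
	}
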
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	[... node_ready.go:58 logged the same NotReady status for the node every ~2.5s from 10:47:01 to 10:50:50 ...]
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	* 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:50:54.702819  259177 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:244: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20211231103230-6736 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.1": exit status 80
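Reading the stderr above: provisioning itself completed (stdout reports "Enabled addons: storage-provisioner, default-storageclass"); what failed is the readiness gate, with the node never reporting Ready before the wait budget ran out (the node_ready poll took its full 4m0s and the 6m0s GUEST_START wait then expired). A minimal manual re-check of the same condition, assuming kubectl is available and that minikube created its usual kubeconfig context named after the profile:

	# hypothetical manual check; mirrors the Ready condition node_ready.go polls for
	kubectl --context default-k8s-different-port-20211231103230-6736 \
	  get node default-k8s-different-port-20211231103230-6736 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" while the node is NotReady, matching every poll above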
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:46:06.801649957Z",
	            "FinishedAt": "2021-12-31T10:46:05.341862453Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/docker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c51c239834c0a79db122933a66bc297c5e82f8810e0ea189de2970c0af2302b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c51c239834c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "b9fb2147bd35b928ce091697818409020532d192f5d386f126eee3cf42c8c85a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
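The inspect output above confirms the container itself restarted cleanly (State.Status "running", ExitCode 0) and that each exposed container port is published on an ephemeral 127.0.0.1 host port, e.g. the apiserver's 8444/tcp on 49429. Single fields can be extracted with the same Go-template mechanism minikube uses later in this log; for example, to fetch the apiserver's host port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-different-port-20211231103230-6736
	# for the state captured above this prints: 49429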
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.100760898s)
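Note that "logs -n 25" keeps only the last 25 lines per source. For a complete capture suitable for attaching to an issue, the command suggested in the error box above writes everything to a file:

	out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs --file=logs.txt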
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p newest-cni-20211231103230-6736 --memory=2200            | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:33:58 UTC | Fri, 31 Dec 2021 10:34:51 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.2-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:52 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                             |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736                        |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736                        | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736             | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                            | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
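
	The provisioner reaches the node over the host port Docker published for the container's 22/tcp (49432 above), then pins the new hostname in /etc/hosts so lookups keep resolving. A minimal standalone sketch of the same idempotent update, using the profile name from this log:

	    # From the host: find the published SSH port for the node container.
	    NAME=default-k8s-different-port-20211231103230-6736
	    docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME"

	    # On the node: set the hostname, then rewrite or append the
	    # 127.0.1.1 entry exactly once.
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    if ! grep -q "127.0.1.1 $NAME" /etc/hosts; then
	      if grep -q '^127.0.1.1' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1.*/127.0.1.1 $NAME/" /etc/hosts
	      else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	      fi
	    fi
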
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
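
	configureAuth regenerates the machine's server certificate against the shared minikube CA, with the node IP and hostnames from the log as SANs. minikube does this in Go via libmachine rather than the openssl CLI, but a rough openssl equivalent (paths relative to the .minikube/certs directory) looks like:

	    # Issue a CA-signed server cert carrying the SANs listed in the log.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.default-k8s-different-port-20211231103230-6736"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-different-port-20211231103230-6736')
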
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
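
	Because this profile runs containerd, the Docker engine inside the node is taken out of the picture: the socket is disabled (socket activation would otherwise restart dockerd on the next connection) and the service unit is masked. The same sequence as a standalone script:

	    # Stop Docker and keep it from coming back.
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket   # no more socket activation
	    sudo systemctl mask docker.service     # unit can no longer be started
	    sudo systemctl is-active --quiet docker || echo 'docker is down'
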
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
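
	The containerd configuration travels as one base64 blob that is decoded into /etc/containerd/config.toml on the node; decoding the same string locally shows what was written. For example, the leading slice of the blob above decodes to the head of the file:

	    # Decode a prefix of the logged payload (the full blob works the same).
	    echo 'dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQi' \
	      | base64 -d
	    # Output:
	    #   version = 2
	    #   root = "/var/lib/containerd"
	    #   state = "/run/containerd"
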
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
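
	This rendered kubeadm.yaml is copied to /var/tmp/minikube and handed to kubeadm init further down the log. Saved locally, the same file can be sanity-checked without mutating the node, since --dry-run only prints the objects and manifests kubeadm would create:

	    # Validate the generated config; nothing is written to the host.
	    sudo kubeadm init --config kubeadm.yaml --dry-run
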
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
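
	The kubelet flags are delivered as a systemd drop-in (the 653-byte 10-kubeadm.conf scp'd just below); the empty ExecStart= line clears the base unit's command before the override supplies the full flag set. On the node, the merged result can be confirmed with:

	    # Show the base unit plus every drop-in, in the order systemd merges them.
	    systemctl cat kubelet
	    # Reload unit files, then restart with the merged command line.
	    sudo systemctl daemon-reload
	    sudo systemctl restart kubelet
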
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
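
	OpenSSL's default verify path looks certificates up by subject-hash filenames, which is why each PEM gets a <hash>.0 symlink; the hashes in the log (51391683, 3ec20f2e, b5213941) come straight from openssl x509 -hash. Wiring one up by hand:

	    # Link a CA cert under its subject hash so tools that trust
	    # /etc/ssl/certs can find it.
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
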
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
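
	Before reusing the machine, minikube compares two views of the runtime to find paused kube-system containers worth unpausing: crictl ps -a sees six container IDs, but runc list returns no state (JSON = null), so the unpause step is skipped with the warning above. The two probes, as run in the log:

	    # Containers the CRI layer knows about in kube-system...
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # ...versus live runc state under containerd's k8s.io runtime root.
	    sudo runc --root /run/containerd/runc/k8s.io list -f json
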
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
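
	Each probe in the loop that follows is a pgrep for a kube-apiserver process launched with minikube's flags; exit status 1 with empty output means no such process exists yet, and the check repeats roughly every 200ms. The same wait as a shell one-liner:

	    # Block until an apiserver process appears, then print its PID.
	    until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do sleep 0.2; done
	    echo "apiserver pid: $pid"
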
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
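
	The kindnet manifest is applied with the version-matched kubectl that minikube ships on the node, after confirming the portmap CNI plugin (needed for hostPort support) is present. From a shell on the node:

	    # Precondition: the portmap plugin must exist.
	    stat /opt/cni/bin/portmap
	    # Apply the CNI manifest against the local admin kubeconfig.
	    sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
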
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
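[Annotation, not part of the test output] The sed pipeline above splices a hosts stanza into the CoreDNS Corefile before replacing the ConfigMap. Reconstructed directly from the sed payload, the injected block is:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }

This is what makes host.minikube.internal resolve to the host gateway from inside pods; the "host record injected into CoreDNS" line a few entries below confirms the replace succeeded.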
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
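[Annotation, not part of the test output] The embed-certs run (process 253675) has now exhausted its node-readiness wait and exits with GUEST_START, while the default-k8s-different-port run (process 259177) keeps polling below. The check that kept failing is equivalent to something like the following stock kubectl commands (an equivalent, not the harness's actual invocation; node_ready.go polls the API directly):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node embed-certs-20211231102953-6736
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait --for=condition=Ready \
      node/embed-certs-20211231102953-6736 --timeout=6m0s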
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
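[Annotation, not part of the test output] The sections below (container status, containerd, describe nodes, dmesg, etcd, kernel, kube-apiserver, kube-controller-manager) appear to follow the layout of a `minikube logs` dump collected after the failure. Assuming the profile still exists, the same information could be regenerated with, for example:

    minikube logs --file=logs.txt -p default-k8s-different-port-20211231103230-6736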
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b452495aa7e45       6de166512aa22       26 seconds ago      Exited              kindnet-cni               5                   5b9b47832c3c9
	6de5e38677b20       b46c42588d511       4 minutes ago       Running             kube-proxy                0                   e57926d0ab3c5
	48225c99a0965       25f8c7f3da61c       4 minutes ago       Running             etcd                      1                   0eff770ba2d39
	be441fc987f41       b6d7abedde399       4 minutes ago       Running             kube-apiserver            1                   9be9eca9d95fc
	d693c50da8741       f51846a4fd288       4 minutes ago       Running             kube-controller-manager   1                   9c10142d32214
	d05e688d9162e       71d575efe6283       4 minutes ago       Running             kube-scheduler            1                   b0d0ced0300d8
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 10:50:55 UTC. --
	Dec 31 10:48:15 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:15.237774858Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:48:15 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:15.980080667Z" level=info msg="RemoveContainer for \"e3030ceb8c9c93e220289d640a23e9dc1e5e5ed73df42ba0f4482c69a83e315b\""
	Dec 31 10:48:15 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:15.986079566Z" level=info msg="RemoveContainer for \"e3030ceb8c9c93e220289d640a23e9dc1e5e5ed73df42ba0f4482c69a83e315b\" returns successfully"
	Dec 31 10:48:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:55.714170839Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Dec 31 10:48:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:55.741672837Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:48:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:55.742247648Z" level=info msg="StartContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:48:56 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:48:56.004951086Z" level=info msg="StartContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\" returns successfully"
	Dec 31 10:49:06 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:06.302200493Z" level=info msg="Finish piping stderr of container \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:49:06 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:06.302309643Z" level=info msg="Finish piping stdout of container \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:49:06 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:06.303109762Z" level=info msg="TaskExit event &TaskExit{ContainerID:e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378,ID:e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378,Pid:2470,ExitStatus:2,ExitedAt:2021-12-31 10:49:06.302719969 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:49:06 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:06.332086618Z" level=info msg="shim disconnected" id=e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378
	Dec 31 10:49:06 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:06.332198584Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:49:07 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:07.083403108Z" level=info msg="RemoveContainer for \"2705400d4c0f39099dd7b8941c1bb9ec74e878287bf34635ea3a014af830107b\""
	Dec 31 10:49:07 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:49:07.089472251Z" level=info msg="RemoveContainer for \"2705400d4c0f39099dd7b8941c1bb9ec74e878287bf34635ea3a014af830107b\" returns successfully"
	Dec 31 10:50:29 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:29.713275394Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 10:50:29 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:29.740101884Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:50:29 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:29.740955219Z" level=info msg="StartContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:50:29 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:29.985118878Z" level=info msg="StartContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\" returns successfully"
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.288084704Z" level=info msg="Finish piping stderr of container \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.288131174Z" level=info msg="Finish piping stdout of container \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.289608472Z" level=info msg="TaskExit event &TaskExit{ContainerID:b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff,ID:b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff,Pid:2543,ExitStatus:2,ExitedAt:2021-12-31 10:50:40.289359828 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.316899784Z" level=info msg="shim disconnected" id=b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.317007489Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:41.262130696Z" level=info msg="RemoveContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:41.269488288Z" level=info msg="RemoveContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\" returns successfully"
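[Annotation, not part of the test output] Note the pattern in these containerd entries: kindnet-cni attempts 4 and 5 are each created, started, and exit with status 2 roughly ten seconds later, so the CNI never initializes and the node never reports Ready. To pull the crashing container's own output one could run on the node (assuming crictl is configured for the containerd socket):

    sudo crictl ps -a --name kindnet-cni
    sudo crictl logs b452495aa7e45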
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:46:38 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:50:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:46:53 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:46:53 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:46:53 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:46:53 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-5x2g8                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-8f86l                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m10s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet     Updated Node Allocatable limit across pods
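[Annotation, not part of the test output] The describe output ties the failure together: Ready is False with "cni plugin not initialized", and the node still carries the node.kubernetes.io/not-ready:NoSchedule taint, so nothing beyond the tolerating system pods can schedule. A quick way to extract just that condition message with standard kubectl jsonpath:

    kubectl get node default-k8s-different-port-20211231103230-6736 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'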
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
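[Annotation, not part of the test output] The repeated "martian source" entries are the kernel's reverse-path filter logging pod-CIDR (10.244.0.0/24) traffic arriving on an interface it does not expect, a common side effect while the CNI is only half-configured. Assuming default Ubuntu settings, the relevant knobs can be inspected with:

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians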
	
	* 
	* ==> etcd [48225c99a09655e9f407ab2f3c22787aa4836d3b837422f8080fb6cb20c5e755] <==
	* {"level":"info","ts":"2021-12-31T10:46:34.919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2021-12-31T10:46:34.919Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2021-12-31T10:46:34.980Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:46:35.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  10:50:56 up  1:33,  0 users,  load average: 0.74, 0.86, 1.45
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [be441fc987f419ba5ae606c3dc7e79411ef7643081ffec1e0932415a1faec812] <==
	* I1231 10:46:39.811139       1 controller.go:611] quota admission added evaluator for: endpoints
	I1231 10:46:39.816524       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1231 10:46:40.101353       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1231 10:46:41.419653       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1231 10:46:41.431907       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1231 10:46:41.488142       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1231 10:46:46.607244       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1231 10:46:53.458797       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1231 10:46:53.908276       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1231 10:46:54.607390       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1231 10:46:56.220169       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.106.28.91]
	W1231 10:46:56.986887       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:46:56.986961       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:46:56.986973       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:46:57.190096       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.190.133]
	I1231 10:46:57.209191       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.103.250.88]
	W1231 10:47:56.988145       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:47:56.988215       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:47:56.988223       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:49:56.988608       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:49:56.988704       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:49:56.988723       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d693c50da8741c5da85b83ff32c1c5fd9e21b99ab0afa5b593e83551ee247dd9] <==
	* I1231 10:46:57.021921       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1231 10:46:57.022479       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:46:57.022518       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1231 10:46:57.081967       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E1231 10:46:57.081991       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1231 10:46:57.082812       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1231 10:46:57.082826       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1231 10:46:57.107316       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-gqg75"
	I1231 10:46:57.181007       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-b7pvv"
	E1231 10:47:23.178896       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:23.597578       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:47:53.197119       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:47:53.611567       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:48:23.215038       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:48:23.627328       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:48:53.233737       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:48:53.644466       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:49:23.249793       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:49:23.661306       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:49:53.266660       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:49:53.684045       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:50:23.285298       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:50:23.701566       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:50:53.300554       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:50:53.723471       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [6de5e38677b20bde5bb55f835d713191bc7054d75f34642dd9f38b9df161d628] <==
	* I1231 10:46:54.503302       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:46:54.503392       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:46:54.503442       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:46:54.603822       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:46:54.603869       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:46:54.603877       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:46:54.603895       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:46:54.604363       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:46:54.604971       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:46:54.604993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:46:54.605095       1 config.go:317] "Starting service config controller"
	I1231 10:46:54.605109       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:46:54.705619       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:46:54.705653       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [d05e688d9162e6301a11f7be3af706989ab36e552895ab4ab8adeecc620fc5d7] <==
	* W1231 10:46:38.096166       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:46:38.096271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:46:38.096104       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:46:38.096287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:46:38.095998       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:46:38.096303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:46:38.950364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:46:38.950693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:46:38.964806       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:38.964848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.082347       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.082428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.127797       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.127849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.179308       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:46:39.179367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:46:39.201670       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:46:39.201705       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:46:39.233921       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:46:39.233975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:46:39.285801       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:46:39.285843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:46:39.306180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:46:39.306224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:46:41.082427       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 10:50:56 UTC. --
	Dec 31 10:49:51 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:49:51.907623    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:49:55 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:49:55.711106    1372 scope.go:110] "RemoveContainer" containerID="e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378"
	Dec 31 10:49:55 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:49:55.711402    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:49:56 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:49:56.908572    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:01 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:01.909820    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:06 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:06.910865    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:07.711204    1372 scope.go:110] "RemoveContainer" containerID="e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378"
	Dec 31 10:50:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:07.711481    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:50:11 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:11.912409    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:16 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:16.913517    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:18 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:18.711600    1372 scope.go:110] "RemoveContainer" containerID="e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378"
	Dec 31 10:50:18 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:18.711912    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:50:21 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:21.914433    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:26 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:26.915717    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:29 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:29.710855    1372 scope.go:110] "RemoveContainer" containerID="e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378"
	Dec 31 10:50:31 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:31.917588    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:36 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:36.919300    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:41.260866    1372 scope.go:110] "RemoveContainer" containerID="e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378"
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:41.261266    1372 scope.go:110] "RemoveContainer" containerID="b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff"
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:41.261585    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:41.920675    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:46 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:46.922253    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:51 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:51.923794    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:50:53 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:50:53.711355    1372 scope.go:110] "RemoveContainer" containerID="b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff"
	Dec 31 10:50:53 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:50:53.711849    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	

                                                
                                                
-- /stdout --
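Reading the minikube logs above, the dominant failure signal is the kindnet-cni container stuck in CrashLoopBackOff while kubelet keeps reporting "cni plugin not initialized", which leaves the node NotReady; the apiserver's repeated 503s for v1beta1.metrics.k8s.io are a downstream symptom, since metrics-server cannot run without pod networking. A minimal triage sketch, assuming the cluster is still reachable under this context (the pod and APIService names are taken verbatim from the logs above, not from the harness):

	# Why is the CNI pod crashing? Fetch the logs of its previous (crashed) container.
	kubectl --context default-k8s-different-port-20211231103230-6736 -n kube-system logs kindnet-5x2g8 --previous
	# Confirm the aggregated metrics API is the "service unavailable" the apiserver complains about.
	kubectl --context default-k8s-different-port-20211231103230-6736 get apiservice v1beta1.metrics.k8s.io -o yaml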
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1 (81.557306ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-xx94d" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-gj6bf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-b7pvv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-gqg75" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (291.33s)
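The NotFound errors above are likely a namespacing artifact rather than evidence the pods were deleted: the describe at helpers_test.go:276 passes no -n flag, so kubectl looks in the default namespace, while these pods live in kube-system and kubernetes-dashboard (as the controller-manager and apiserver logs above show). A hypothetical re-run with explicit namespaces (a sketch; the harness did not execute this):

	kubectl --context default-k8s-different-port-20211231103230-6736 -n kube-system describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner
	kubectl --context default-k8s-different-port-20211231103230-6736 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75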

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-dwbwm" [5a12cb6a-d97e-4025-b6a0-9421173550dc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E1231 10:48:29.443854    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:49:41.556854    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:49:52.489630    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:50:10.309789    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:50:22.105079    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:50:36.756722    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:50:43.946000    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
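These cert_rotation errors reference client certs for profiles (bridge, addons, functional, auto, custom-weave, no-preload) that earlier tests in this run already tore down; client-go's cert watcher in the shared test process apparently still holds the old paths, so the messages are likely noise for this test rather than part of its failure. A quick way to see which profile certs actually remain on the agent (a sketch using the path from the errors above):

	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/*/client.crt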

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
start_stop_delete_test.go:259: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2021-12-31 10:57:14.056913481 +0000 UTC m=+4533.038729421
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe po kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context embed-certs-20211231102953-6736 describe po kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard:
Name:           kubernetes-dashboard-ccd587f44-dwbwm
Namespace:      kubernetes-dashboard
Priority:       0
Node:           <none>
Labels:         gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=ccd587f44
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/kubernetes-dashboard-ccd587f44
Containers:
kubernetes-dashboard:
Image:      kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
Port:       9090/TCP
Host Port:  0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:  <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-688hb (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-688hb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              beta.kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  45s (x13 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 logs kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context embed-certs-20211231102953-6736 logs kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard:
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
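The describe output above shows why the dashboard pod never started: it stayed Pending with FailedScheduling because the only node still carried the node.kubernetes.io/not-ready taint, which the node controller keeps in place while the node reports NotReady (here, plausibly because the CNI never initialized, as in the other failures in this report). A sketch for confirming the node condition and taint directly, assuming the profile's context is still valid and that the node name matches the profile name:

	kubectl --context embed-certs-20211231102953-6736 get nodes -o wide
	kubectl --context embed-certs-20211231102953-6736 describe node embed-certs-20211231102953-6736 | grep -A 2 Taints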
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:43:23.613074153Z",
	            "FinishedAt": "2021-12-31T10:43:22.23405709Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfc73bddd32d4a580e80ede53e861c2019c40094c0f4bf8dbec95ea0223d20b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfc73bddd32d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "9ce1b9b9e6af1217d03fa31376bf39eb0632af0bb5247bc92fc3c48c1620d77a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
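The useful part of this inspect dump for post-mortem work is NetworkSettings.Ports: each container port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 behind an ephemeral host port (for example 8443 -> 49424). To pull a single binding without scanning the JSON, a Go-template one-liner works (a sketch; the container name comes from the dump above):

	# Extract the host port mapped to the API server port (8443/tcp).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-20211231102953-6736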
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.251690468s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:52 UTC | Fri, 31 Dec 2021 10:34:53 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| unpause | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:53:21 UTC | Fri, 31 Dec 2021 10:53:22 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
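The table above is minikube's command audit for this run; long invocations wrap across several rows, so each entry reads as one command. Reassembled from its wrapped cells, the metrics-server entry at 10:39:06, for example, was the single invocation below (a reconstruction from the table, using the MINIKUBE_BIN path shown later in this log):

    out/minikube-linux-amd64 addons enable metrics-server \
      -p old-k8s-version-20211231102602-6736 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain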
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6
736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:tru
e default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
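Notable in the restored profile config above: NodePort is 8444, the non-default API server port this profile exercises; the kubelet carries three extra options (global-housekeeping-interval, housekeeping-interval, cni-conf-dir); and the metrics-server addon is pointed at image k8s.gcr.io/echoserver:1.4 on registry fake.domain, which suggests the suite is verifying that custom image and registry overrides are honored rather than expecting a working metrics-server.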
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
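The script above, run over SSH, pins the profile name to 127.0.1.1 inside the node so that the hostname set a few lines earlier keeps resolving locally. On a live profile the result could be checked with something like the sketch below (standard minikube CLI; treat the exact quoting as illustrative):

    minikube ssh -p default-k8s-different-port-20211231103230-6736 \
      "grep 127.0.1.1 /etc/hosts"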
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
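Since containerd is the selected runtime, minikube stops and masks the docker service inside the node so nothing else answers on a CRI socket; the is-active probes before and after confirm the unit is gone. Masking, unlike plain disabling, makes the unit unstartable even as a dependency of another unit; in isolation the effect looks like this (an illustration, not minikube code):

    sudo systemctl mask docker.service   # symlinks the unit to /dev/null
    sudo systemctl start docker.service  # now fails: "Unit docker.service is masked"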
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
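The base64 payload above is the containerd configuration that minikube installs at /etc/containerd/config.toml. Piped through base64 -d it begins:

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    oom_score = 0
    [grpc]
      address = "/run/containerd/containerd.sock"

Further down it sets sandbox_image = "k8s.gcr.io/pause:3.6", SystemdCgroup = false, and a CRI conf_dir of "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra-config applied later in this log.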
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
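The one-liner above rewrites /etc/hosts in two steps: the { grep -v ...; echo ...; } group runs as the unprivileged SSH user and stages the edited file at /tmp/h.$$ ($$ being the shell's PID), and only the final cp runs under sudo. A direct sudo redirection into /etc/hosts would fail, since the > is opened by the unprivileged shell, and staging also avoids truncating the file while grep is still reading it.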
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
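The rendered kubeadm config above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. bindPort and controlPlaneEndpoint use 8444 rather than minikube's usual 8443 (the "different port" this profile exists to test), and the evictionHard zeros together with imageGCHighThresholdPercent: 100 switch off kubelet disk-pressure management, as the inline comment says. minikube stages the file on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below); handed to kubeadm directly it would look roughly like this sketch, not necessarily minikube's exact invocation:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new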
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
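The drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 653-byte scp a few lines below); the empty ExecStart= line clears any ExecStart inherited from the base unit before the full kubelet command line is set, which is standard systemd override practice. What kubelet actually runs with can be inspected on the node with something like (systemctl cat prints the unit plus its drop-ins):

    minikube ssh -p default-k8s-different-port-20211231103230-6736 \
      "sudo systemctl cat kubelet"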
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
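	
	The wait loop above polls the node's Ready condition every ~2.5s until the 6m budget runs out; the sections below are the minikube logs dump captured at failure. A minimal way to reproduce that readiness check by hand, assuming the kubeconfig context created by this run, is:
	
	  kubectl --context default-k8s-different-port-20211231103230-6736 get nodes \
	    -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}{"\n"}'
	  # prints "False" for as long as the CNI plugin stays uninitialized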
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1006b19bf3eaa       6de166512aa22       58 seconds ago      Running             kindnet-cni               4                   b2ae6303a3d29
	a22d65cfe95fb       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   b2ae6303a3d29
	62c0d868eb022       b46c42588d511       13 minutes ago      Running             kube-proxy                0                   7d87514afca2c
	8896082530359       71d575efe6283       13 minutes ago      Running             kube-scheduler            1                   caee512e50be5
	bedf4fc421a5b       f51846a4fd288       13 minutes ago      Running             kube-controller-manager   1                   23a53efe4f57a
	5e4fcc10f62c1       b6d7abedde399       13 minutes ago      Running             kube-apiserver            1                   5526833e62549
	e2c98b3c8c237       25f8c7f3da61c       13 minutes ago      Running             etcd                      1                   9e5cd803bf1ec
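	
	The restart counts above show kindnet-cni crash-looping (attempt 4 running for 58 seconds, attempt 3 already exited) while the rest of the control plane has been up for 13 minutes. One way to read the failed attempt's output is via crictl inside the node, using the container ID from the table (a sketch, assuming crictl is on the node's PATH as in the kicbase image):
	
	  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 ssh -- sudo crictl logs a22d65cfe95fb
	  # crictl accepts ID prefixes; this prints why attempt 3 exited with status 2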
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 10:57:15 UTC. --
	Dec 31 10:49:47 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:49:47.639348920Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\""
	Dec 31 10:49:47 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:49:47.639850982Z" level=info msg="StartContainer for \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\""
	Dec 31 10:49:47 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:49:47.799701873Z" level=info msg="StartContainer for \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\" returns successfully"
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.100765596Z" level=info msg="Finish piping stderr of container \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\""
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.100790899Z" level=info msg="Finish piping stdout of container \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\""
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.101590879Z" level=info msg="TaskExit event &TaskExit{ContainerID:e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c,ID:e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c,Pid:2645,ExitStatus:2,ExitedAt:2021-12-31 10:52:28.101289545 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.128941128Z" level=info msg="shim disconnected" id=e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.129039721Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.721814882Z" level=info msg="RemoveContainer for \"b37d7178b58ae4192845c7c2d77ea5f32aac049f2de8ba2ecbf469e732d957ac\""
	Dec 31 10:52:28 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:28.727916590Z" level=info msg="RemoveContainer for \"b37d7178b58ae4192845c7c2d77ea5f32aac049f2de8ba2ecbf469e732d957ac\" returns successfully"
	Dec 31 10:52:55 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:55.619498278Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Dec 31 10:52:55 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:55.648258085Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\""
	Dec 31 10:52:55 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:55.648886527Z" level=info msg="StartContainer for \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\""
	Dec 31 10:52:55 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:52:55.884938519Z" level=info msg="StartContainer for \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\" returns successfully"
	Dec 31 10:55:36 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:36.105953929Z" level=info msg="Finish piping stderr of container \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\""
	Dec 31 10:55:36 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:36.106051554Z" level=info msg="Finish piping stdout of container \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\""
	Dec 31 10:55:36 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:36.106856118Z" level=info msg="TaskExit event &TaskExit{ContainerID:a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b,ID:a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b,Pid:2734,ExitStatus:2,ExitedAt:2021-12-31 10:55:36.106477994 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:55:36 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:36.136960915Z" level=info msg="shim disconnected" id=a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b
	Dec 31 10:55:36 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:36.137122875Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:55:37 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:37.084491105Z" level=info msg="RemoveContainer for \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\""
	Dec 31 10:55:37 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:55:37.091922311Z" level=info msg="RemoveContainer for \"e38fb8dda00d5cbad49d428cd05de5b7364d7407838036ba462559cd2ba25d3c\" returns successfully"
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.614537012Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.645342329Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.646022333Z" level=info msg="StartContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.897928524Z" level=info msg="StartContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\" returns successfully"
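	
	Each kindnet-cni attempt runs for roughly three minutes before containerd records a TaskExit with ExitStatus:2 and the kubelet schedules the next attempt under back-off. Assuming containerd runs as the containerd systemd unit (the journal header above suggests it does), the exit events can be pulled directly:
	
	  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 ssh -- \
	    'sudo journalctl -u containerd --no-pager | grep TaskExit'
	  # one TaskExit line per crashed kindnet-cni attempt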
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:43:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:57:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:54:26 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:54:26 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:54:26 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:54:26 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-sz9gt                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-bf6l7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 13m                kube-proxy  
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x4 over 13m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
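	
	The Ready=False condition ("cni plugin not initialized") and the node.kubernetes.io/not-ready:NoSchedule taint above are the direct cause of every unschedulable pod later in this report. The taint can be confirmed without the full describe output:
	
	  kubectl --context embed-certs-20211231102953-6736 get node embed-certs-20211231102953-6736 \
	    -o jsonpath='{.spec.taints}{"\n"}'
	  # prints the not-ready:NoSchedule taint for as long as the CNI is down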
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [e2c98b3c8c23748a45dcacedd39e95616ad8442e36bb6b6fda207f3c9cd41381] <==
	* {"level":"info","ts":"2021-12-31T10:43:52.009Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.381Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.383Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:43:52.384Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:53:53.198Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":662}
	{"level":"info","ts":"2021-12-31T10:53:53.199Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":662,"took":"652.758µs"}
	
	* 
	* ==> kernel <==
	*  10:57:15 up  1:39,  0 users,  load average: 0.30, 0.51, 1.09
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [5e4fcc10f62c1036c54aadc249c4bd994a626fae29d5142d5ec3303290197b95] <==
	* I1231 10:47:13.610967       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:48:56.104904       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:48:56.104997       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:48:56.105009       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:49:56.105593       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:49:56.105655       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:49:56.105663       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:51:56.105877       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:51:56.105977       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:51:56.105995       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:53:56.111791       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:53:56.111879       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:53:56.111887       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:54:56.112744       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:54:56.112835       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:54:56.112849       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:56:56.114043       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:56:56.114128       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:56:56.114136       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
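	
	The apiserver cannot aggregate v1beta1.metrics.k8s.io because the metrics-server pod (listed as non-running in the post-mortem below) never got scheduled on the NotReady node, so the OpenAPI controller requeues the same 503 indefinitely. The aggregated API's availability can be checked directly:
	
	  kubectl --context embed-certs-20211231102953-6736 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")]}{"\n"}'
	  # reports Available=False with a discovery-failure reason while the backing service is down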
	
	* 
	* ==> kube-controller-manager [bedf4fc421a5b6eb6f42473129bdc4ccad70191cd113ec514f00efa708d19047] <==
	* W1231 10:51:10.941727       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:51:40.503289       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:51:40.958977       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:52:10.515112       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:52:10.977699       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:52:40.527588       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:52:40.995894       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:53:10.539417       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:53:11.016163       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:53:40.549752       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:53:41.030771       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:54:10.560705       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:54:11.048038       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:54:40.599752       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:54:41.064498       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:55:10.625002       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:55:11.079786       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:55:40.639016       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:55:41.099202       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:10.647531       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:11.116759       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:40.670312       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:41.135445       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:57:10.683052       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:57:11.152385       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491] <==
	* I1231 10:44:12.607400       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:44:12.607485       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:44:12.607591       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:44:12.801716       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:44:12.802050       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:44:12.802136       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:44:12.802156       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:44:12.802606       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:44:12.803335       1 config.go:317] "Starting service config controller"
	I1231 10:44:12.803356       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:44:12.803492       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:44:12.803697       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:44:12.904704       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:44:12.907142       1 shared_informer.go:247] Caches are synced for endpoint slice config 
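	
	kube-proxy itself started cleanly and synced its caches, which confirms the apiserver is reachable and isolates the failure to the CNI layer. With no explicit proxy mode configured it fell back to iptables; the mode it settled on can be queried from its metrics endpoint (10249 is the default metrics bind address, assumed unchanged here):
	
	  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 ssh -- curl -s localhost:10249/proxyMode
	  # iptables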
	
	* 
	* ==> kube-scheduler [889608253035982887397c394e7ec41a7768efbf6a0e85f40e25bcc483a2df07] <==
	* W1231 10:43:55.191217       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:43:55.191299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:43:55.193121       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:43:55.193371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1231 10:43:55.193600       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:43:55.193679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:43:55.194203       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:43:55.194299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:43:55.194319       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:55.194336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:55.194512       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:43:55.194595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:43:55.194973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:43:55.195211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:43:55.195010       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:55.195470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.133167       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:56.133248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:56.183335       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:43:56.183397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:43:56.356344       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:56.356384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.381076       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:43:56.381112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1231 10:43:59.289785       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 10:57:15 UTC. --
	Dec 31 10:55:38 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:38.966089    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:55:43 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:43.967691    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:55:48 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:48.968582    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:55:52 embed-certs-20211231102953-6736 kubelet[1376]: I1231 10:55:52.611814    1376 scope.go:110] "RemoveContainer" containerID="a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b"
	Dec 31 10:55:52 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:52.612135    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 10:55:53 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:53.969878    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:55:58 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:55:58.970569    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:03 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:03.971794    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:04 embed-certs-20211231102953-6736 kubelet[1376]: I1231 10:56:04.611639    1376 scope.go:110] "RemoveContainer" containerID="a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b"
	Dec 31 10:56:04 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:04.611918    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 10:56:08 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:08.973422    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:13 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:13.975030    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:16 embed-certs-20211231102953-6736 kubelet[1376]: I1231 10:56:16.611867    1376 scope.go:110] "RemoveContainer" containerID="a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b"
	Dec 31 10:56:18 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:18.975843    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:23 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:23.977390    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:28 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:28.978595    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:33 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:33.979828    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:38 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:38.980730    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:43 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:43.982568    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:48 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:48.984203    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:53 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:53.985591    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:56:58 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:56:58.986770    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:57:03 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:57:03.988635    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:57:08 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:57:08.989803    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:57:13 embed-certs-20211231102953-6736 kubelet[1376]: E1231 10:57:13.991366    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
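	
	The kubelet re-reports NetworkReady=false every five seconds because no CNI config ever lands on disk: kindnet is responsible for writing it but keeps crashing first. Whether the config exists can be checked on the node (/etc/cni/net.d is the default confdir, and 10-kindnet.conflist is the file kindnet normally writes; both are assumptions about this image):
	
	  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 ssh -- sudo ls -l /etc/cni/net.d
	  # empty, or missing 10-kindnet.conflist, for as long as kindnet-cni crash-loops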
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1 (86.663976ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-4t6g6" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-fwqnh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-8ctl7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-dwbwm" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gqg75" [4d8537b1-da9d-42ab-a91b-3ef7858d6421] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E1231 10:51:13.450313    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:51:59.313041    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 10:52:06.991689    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
E1231 10:52:36.495919    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 10:52:39.268959    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:259: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2021-12-31 10:59:57.715942142 +0000 UTC m=+4696.697758073
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe po kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context default-k8s-different-port-20211231103230-6736 describe po kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard:
Name:           kubernetes-dashboard-ccd587f44-gqg75
Namespace:      kubernetes-dashboard
Priority:       0
Node:           <none>
Labels:         gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=ccd587f44
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/kubernetes-dashboard-ccd587f44
Containers:
kubernetes-dashboard:
Image:      kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
Port:       9090/TCP
Host Port:  0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:  <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g9gdd (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-g9gdd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              beta.kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  15s (x13 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
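
The scheduler message matches the taint recorded on the node: the pod tolerates node.kubernetes.io/not-ready only with effect NoExecute (an eviction grace period), not NoSchedule, so it stays Pending indefinitely. The same field-selector query helpers_test.go uses for the embed-certs cluster works here to list everything blocked by the taint:

  kubectl --context default-k8s-different-port-20211231103230-6736 get po -A \
    --field-selector=status.phase!=Running
  # every non-Running pod traces back to the node's not-ready:NoSchedule taint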
start_stop_delete_test.go:259: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 logs kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard
start_stop_delete_test.go:259: (dbg) kubectl --context default-k8s-different-port-20211231103230-6736 logs kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard:
start_stop_delete_test.go:260: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:46:06.801649957Z",
	            "FinishedAt": "2021-12-31T10:46:05.341862453Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c51c239834c0a79db122933a66bc297c5e82f8810e0ea189de2970c0af2302b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c51c239834c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "b9fb2147bd35b928ce091697818409020532d192f5d386f126eee3cf42c8c85a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
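Editor's note: the full `docker inspect` dump above is what the harness archives, but when triaging interactively the same container can be queried for just the fields that matter here (restart timestamps, mapped SSH port) with `docker inspect --format` Go templates and `docker port`. A sketch, assuming the same container name as above:

	# Container state and restart timestamps only:
	docker inspect --format '{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' \
	  default-k8s-different-port-20211231103230-6736
	# Host port bound to the container's SSH port (22/tcp):
	docker port default-k8s-different-port-20211231103230-6736 22/tcp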
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
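Editor's note: `minikube status --format` takes a Go template over the status struct; the two fields the harness probes separately in this report ({{.Host}} above, {{.APIServer}} earlier) can be combined into a single hedged one-liner when reproducing by hand:

	out/minikube-linux-amd64 status -p default-k8s-different-port-20211231103230-6736 \
	  --format '{{.Host}} {{.APIServer}}'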
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.344773242s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:53 UTC | Fri, 31 Dec 2021 10:34:54 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| unpause | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:53:21 UTC | Fri, 31 Dec 2021 10:53:22 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:57:14 UTC | Fri, 31 Dec 2021 10:57:15 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6
736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:tru
e default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
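	The containerd config is shipped as a single base64 blob and written via "base64 -d | sudo tee /etc/containerd/config.toml". To inspect what actually landed on disk, the payload can be decoded locally (a sketch; B64 is a hypothetical shell variable holding the blob from the Run line above):
	    B64='<blob from the printf above>'
	    echo "$B64" | base64 -d | head -n 4
	    # version = 2
	    # root = "/var/lib/containerd"
	    # state = "/run/containerd"
	    # oom_score = 0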
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2
CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
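	The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being applied. It could be validated without modifying the node (a sketch; kubeadm's standard --dry-run flag is an assumption here, not something this log uses):
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run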
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
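	The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention for CA directories: the hash printed by the "openssl x509 -hash" run two lines before each link is reused as the link name. For example, per the log, the minikube CA hashes to b5213941:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941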
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true ex
tra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
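	The warning records a disagreement between two views of the runtime: crictl (via the CRI API) reported six kube-system containers, while runc's state directory for the k8s.io namespace listed none, so there was nothing to unpause. Both sides of the comparison come straight from the commands above and can be rerun by hand:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    sudo runc --root /run/containerd/runc/k8s.io list -f json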
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
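	(Editor's note: the cli_runner/sshutil pairs above follow one pattern — ask Docker for the host port published for the container's 22/tcp, then dial an SSH client at 127.0.0.1:<port>. A standalone sketch of the port lookup, with the command and Go template taken verbatim from the log and error handling simplified:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort shells out to docker the same way the cli_runner lines above
// do, using a Go template to extract the published host port for 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-different-port-20211231103230-6736")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // e.g. port 49432 in the log above
}
```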
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
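	(Editor's note: the host-record injection completed here is done at 10:46:54.795 with a sed pipeline over `kubectl get configmap coredns`, inserting a hosts block ahead of the Corefile's forward directive. A hedged client-go equivalent — the string surgery below is a simplification of that sed expression, not minikube's code:)

```go
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const hostsBlock = `        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Insert the hosts block immediately before the forward directive, which
	// is what the sed `/^        forward .../i ...` pipeline above achieves.
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(corefile,
			"        forward .", hostsBlock+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
```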
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
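	(Editor's note: every addon above is staged the same way — the manifest bytes are streamed over SSH into /etc/kubernetes/addons ("scp memory --> path (N bytes)"), then applied in a single kubectl invocation like the one just above. A compressed sketch of that pattern with golang.org/x/crypto/ssh; the key path and manifest contents are placeholders, not values from this run:)

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams data into path on the remote host via `sudo tee`,
// standing in for the ssh_runner "scp memory --> path (N bytes)" lines.
func pushFile(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
}

func main() {
	key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49432", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	manifest := []byte("# dashboard-ns.yaml contents would go here\n")
	if err := pushFile(client, "/etc/kubernetes/addons/dashboard-ns.yaml", manifest); err != nil {
		panic(err)
	}
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// One apply for all staged manifests, as in the kubectl line above.
	if err := sess.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml"); err != nil {
		panic(err)
	}
}
```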
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	9329ce01dc0e5       6de166512aa22       About a minute ago   Exited              kindnet-cni               7                   5b9b47832c3c9
	6de5e38677b20       b46c42588d511       13 minutes ago       Running             kube-proxy                0                   e57926d0ab3c5
	48225c99a0965       25f8c7f3da61c       13 minutes ago       Running             etcd                      1                   0eff770ba2d39
	be441fc987f41       b6d7abedde399       13 minutes ago       Running             kube-apiserver            1                   9be9eca9d95fc
	d693c50da8741       f51846a4fd288       13 minutes ago       Running             kube-controller-manager   1                   9c10142d32214
	d05e688d9162e       71d575efe6283       13 minutes ago       Running             kube-scheduler            1                   b0d0ced0300d8
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 10:59:59 UTC. --
	Dec 31 10:50:40 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:40.317007489Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:41.262130696Z" level=info msg="RemoveContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\""
	Dec 31 10:50:41 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:50:41.269488288Z" level=info msg="RemoveContainer for \"e3fed7b69e8787d1bcdb35975c6c7b6ed02981fd38c158bca2baf43c66a02378\" returns successfully"
	Dec 31 10:53:21 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:21.714233171Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Dec 31 10:53:21 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:21.741616783Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:53:21 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:21.742414868Z" level=info msg="StartContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:53:21 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:21.986397037Z" level=info msg="StartContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\" returns successfully"
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.288769902Z" level=info msg="Finish piping stderr of container \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.288788075Z" level=info msg="Finish piping stdout of container \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.289570823Z" level=info msg="TaskExit event &TaskExit{ContainerID:34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc,ID:34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc,Pid:2861,ExitStatus:2,ExitedAt:2021-12-31 10:53:32.289267884 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.319900036Z" level=info msg="shim disconnected" id=34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.320000250Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.593910341Z" level=info msg="RemoveContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.599907499Z" level=info msg="RemoveContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\" returns successfully"
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.714323981Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.749622770Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.750390208Z" level=info msg="StartContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.903686780Z" level=info msg="StartContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\" returns successfully"
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.290062048Z" level=info msg="Finish piping stderr of container \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.290078473Z" level=info msg="Finish piping stdout of container \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.291015411Z" level=info msg="TaskExit event &TaskExit{ContainerID:9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3,ID:9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3,Pid:2958,ExitStatus:2,ExitedAt:2021-12-31 10:58:55.290734411 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.322065104Z" level=info msg="shim disconnected" id=9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.322190962Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:56.193168675Z" level=info msg="RemoveContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:56.199377975Z" level=info msg="RemoveContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:46:38 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 10:59:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 10:56:59 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 10:56:59 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 10:56:59 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 10:56:59 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-5x2g8                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-8f86l                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 13m                kube-proxy  
	  Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x5 over 13m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x5 over 13m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [48225c99a09655e9f407ab2f3c22787aa4836d3b837422f8080fb6cb20c5e755] <==
	* {"level":"info","ts":"2021-12-31T10:46:34.980Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:46:35.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-12-31T10:56:35.719Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":705}
	{"level":"info","ts":"2021-12-31T10:56:35.720Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":705,"took":"814.329µs"}
	
	* 
	* ==> kernel <==
	*  10:59:59 up  1:42,  0 users,  load average: 0.37, 0.44, 0.97
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [be441fc987f419ba5ae606c3dc7e79411ef7643081ffec1e0932415a1faec812] <==
	* I1231 10:49:56.988723       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:51:38.988192       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:51:38.988299       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:51:38.988308       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:52:38.988612       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:52:38.988689       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:52:38.988696       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:54:38.989684       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:54:38.989809       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:54:38.989831       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:56:38.994042       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:56:38.994140       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:56:38.994152       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:57:38.994365       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:57:38.994429       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:57:38.994437       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:59:38.994992       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:59:38.995135       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:59:38.995145       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
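	(Editor's note: the repeating 503s above mean the aggregated metrics API never became available — the apiserver keeps requeueing v1beta1.metrics.k8s.io because metrics-server cannot run on a NotReady node. One hedged way to confirm that from a client, assuming the kube-aggregator clientset at the import path below; illustrative only:)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := aggregator.NewForConfigOrDie(cfg)
	svc, err := client.ApiregistrationV1().APIServices().Get(
		context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// On this cluster we would expect Available=False while the node stays
	// NotReady, matching the 503s the apiserver logs above.
	for _, c := range svc.Status.Conditions {
		if c.Type == apiregv1.Available {
			fmt.Printf("Available=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```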
	
	* 
	* ==> kube-controller-manager [d693c50da8741c5da85b83ff32c1c5fd9e21b99ab0afa5b593e83551ee247dd9] <==
	* W1231 10:53:53.822681       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:54:23.399451       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:54:23.840674       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:54:53.413196       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:54:53.858325       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:55:23.425714       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:55:23.876744       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:55:53.436733       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:55:53.892334       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:23.446373       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:23.907869       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:53.458938       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:53.928944       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:57:23.479629       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:57:23.946314       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:57:53.505441       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:57:53.965020       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:58:23.526957       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:58:23.983308       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:58:53.550759       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:58:54.002922       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:59:23.576651       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:59:24.023370       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:59:53.604046       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:59:54.040781       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [6de5e38677b20bde5bb55f835d713191bc7054d75f34642dd9f38b9df161d628] <==
	* I1231 10:46:54.503302       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:46:54.503392       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:46:54.503442       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:46:54.603822       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:46:54.603869       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:46:54.603877       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:46:54.603895       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:46:54.604363       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:46:54.604971       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:46:54.604993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:46:54.605095       1 config.go:317] "Starting service config controller"
	I1231 10:46:54.605109       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:46:54.705619       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:46:54.705653       1 shared_informer.go:247] Caches are synced for endpoint slice config 
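	kube-proxy logged an empty proxyMode and fell back to the iptables proxier; that is the expected default here, not an error. If one wanted to confirm what was actually configured, a minimal sketch (assuming the kubeadm-style kube-proxy ConfigMap that minikube's bootstrapper creates) would be:

	# Hypothetical check; the ConfigMap name and the "mode:" key follow kubeadm conventions.
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -m1 'mode:'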
	
	* 
	* ==> kube-scheduler [d05e688d9162e6301a11f7be3af706989ab36e552895ab4ab8adeecc620fc5d7] <==
	* W1231 10:46:38.096166       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:46:38.096271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:46:38.096104       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:46:38.096287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:46:38.095998       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:46:38.096303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:46:38.950364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:46:38.950693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:46:38.964806       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:38.964848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.082347       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.082428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.127797       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.127849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.179308       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:46:39.179367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:46:39.201670       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:46:39.201705       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:46:39.233921       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:46:39.233975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:46:39.285801       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:46:39.285843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:46:39.306180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:46:39.306224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:46:41.082427       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
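	The "forbidden" list/watch failures above are confined to the first seconds of scheduler startup, while RBAC bootstrapping races the informers; the closing "Caches are synced" line shows the scheduler recovered, so these warnings are startup noise rather than the failure cause. To double-check the scheduler's effective permissions after bootstrap, one could run (illustrative; impersonation requires cluster-admin credentials):

	# Verbs and resources taken from the warnings above; --as impersonates the scheduler's identity.
	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler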
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 10:59:59 UTC. --
	Dec 31 10:58:52 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:58:52.051754    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:58:56.191854    1372 scope.go:110] "RemoveContainer" containerID="34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc"
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:58:56.192219    1372 scope.go:110] "RemoveContainer" containerID="9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3"
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:58:56.192525    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:58:57 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:58:57.052727    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:02 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:02.053581    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:07.055486    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:59:07.710829    1372 scope.go:110] "RemoveContainer" containerID="9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3"
	Dec 31 10:59:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:07.711238    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:59:12 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:12.056717    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:17 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:17.057799    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:22 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:22.058989    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:22 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:59:22.711185    1372 scope.go:110] "RemoveContainer" containerID="9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3"
	Dec 31 10:59:22 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:22.711460    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:59:27 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:27.060084    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:32 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:32.061282    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:35 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:59:35.711433    1372 scope.go:110] "RemoveContainer" containerID="9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3"
	Dec 31 10:59:35 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:35.711769    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:59:37 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:37.062813    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:42 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:42.064198    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:47 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:47.065572    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:48 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 10:59:48.711302    1372 scope.go:110] "RemoveContainer" containerID="9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3"
	Dec 31 10:59:48 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:48.712442    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 10:59:52 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:52.067144    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 10:59:57 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 10:59:57.068631    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
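	The kubelet log shows the actual failure: the CNI plugin never initializes because the kindnet-cni container sits in CrashLoopBackOff with a 5m back-off, so the node never reports NetworkReady. The pod and container names below are taken verbatim from the log; a natural next step (outside the test run; the --context flag for this profile is omitted) would be to pull the crashed container's previous logs:

	# Pod and container names come from the kubelet messages above.
	kubectl -n kube-system logs kindnet-5x2g8 -c kindnet-cni --previous
	kubectl -n kube-system describe pod kindnet-5x2g8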
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1 (81.018658ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-xx94d" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-gj6bf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-b7pvv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-gqg75" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (543.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-br66c" [192fe7d8-9684-482f-9c73-819e68be9963] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
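The dashboard pod stays Pending because the cluster's only node carries a taint the pod does not tolerate; the message does not name the taint, but it can be listed directly (illustrative; the --context flag for the old-k8s-version profile is omitted):

	# Either form works; the jsonpath variant prints the taints per node.
	kubectl describe nodes | grep -A2 -i taints
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'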
E1231 10:53:29.444537    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:54:41.557814    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:55:10.310717    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:55:22.104775    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
E1231 10:55:36.756067    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
E1231 10:55:43.945537    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:56:04.608169    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:56:13.450388    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:56:59.313515    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:57:39.268932    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:58:25.150574    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:58:29.444322    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 10:58:39.801846    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 61 times]
E1231 10:59:41.557307    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[last message repeated 142 times; 131 of the repeats each preceded by an identical "=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop" marker]

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2021-12-31 11:02:23.958579146 +0000 UTC m=+4842.940395076
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe po kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe po kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard: context deadline exceeded (2.047µs)
start_stop_delete_test.go:272: kubectl --context old-k8s-version-20211231102602-6736 describe po kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 logs kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 logs kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard: context deadline exceeded (1.42µs)
start_stop_delete_test.go:272: kubectl --context old-k8s-version-20211231102602-6736 logs kubernetes-dashboard-766959b846-br66c -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (646ns)
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20211231102602-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
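
The failure above is a bounded poll: the helper repeatedly lists pods matching the label selector k8s-app=kubernetes-dashboard until one is ready or the 9m0s deadline passes ("timed out waiting for the condition" is the stock timeout error from the apimachinery wait package). A rough sketch of that pattern, assuming a configured client-go clientset; the package name, waitForLabeledPod, and the 5-second interval are illustrative, not the actual helper code:

	// Sketch: poll for a Running pod matching a label selector, bounded by a
	// context deadline. Transient list errors are logged and retried, which
	// is what produces repeated WARNING lines like the ones above.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForLabeledPod(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
		return wait.PollImmediateUntil(5*time.Second, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				fmt.Printf("WARNING: pod list for %q returned: %v\n", selector, err)
				return false, nil // retry; the deadline ends the loop
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		}, ctx.Done()) // yields "timed out waiting for the condition" on deadline
	}
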
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20211231102602-6736
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20211231102602-6736:

-- stdout --
	[
	    {
	        "Id": "5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736",
	        "Created": "2021-12-31T10:26:13.51267746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:39:29.118004867Z",
	            "FinishedAt": "2021-12-31T10:39:27.710647386Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hostname",
	        "HostsPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/hosts",
	        "LogPath": "/var/lib/docker/containers/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736/5984218b7d4823e8498624f8b1e60a9b4db566cfd22d7fd9bc67a1d14192d736-json.log",
	        "Name": "/old-k8s-version-20211231102602-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20211231102602-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20211231102602-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a37d8cfe70425deaebe38daea17de28a260f6bff38e2d728f0c9e8f15e10cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20211231102602-6736",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20211231102602-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20211231102602-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20211231102602-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7bcb66e32570c51223584d89c06c38407a807612f74bbcd0645dab033af753ae",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49421"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49418"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49419"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7bcb66e32570",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20211231102602-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5984218b7d48",
	                        "old-k8s-version-20211231102602-6736"
	                    ],
	                    "NetworkID": "689da033f191c821bd60ad0334b0149b7450bc9a9e69f2e467eaea0327517488",
	                    "EndpointID": "b5fcec0b7d4b06090fe9be385801ede5fd25d0e4d16b5573d54b18438c62a2e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
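
In the NetworkSettings.Ports map above, each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral host port (e.g. 22/tcp -> 49422), which is how the harness reaches SSH and the API server inside the kic container. A minimal sketch of reading one such binding back out of `docker inspect` JSON, assuming the shape shown above; the package name and hostPort helper are hypothetical:

	// Sketch: decode the Ports map from `docker inspect <container>` output
	// and return the first host binding for a given container port.
	package inspectports

	import (
		"encoding/json"
		"fmt"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	// hostPort(raw, "22/tcp") would yield "127.0.0.1:49422" for the output above.
	func hostPort(inspectJSON []byte, containerPort string) (string, error) {
		var entries []inspectEntry
		if err := json.Unmarshal(inspectJSON, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 {
			return "", fmt.Errorf("empty inspect output")
		}
		bindings := entries[0].NetworkSettings.Ports[containerPort]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no binding for %s", containerPort)
		}
		return bindings[0].HostIp + ":" + bindings[0].HostPort, nil
	}
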
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20211231102602-6736 logs -n 25: (1.070939565s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| unpause | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:54 UTC | Fri, 31 Dec 2021 10:34:55 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:56 UTC | Fri, 31 Dec 2021 10:34:59 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:53:21 UTC | Fri, 31 Dec 2021 10:53:22 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:57:14 UTC | Fri, 31 Dec 2021 10:57:15 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:59:58 UTC | Fri, 31 Dec 2021 10:59:59 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
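
The two cni.go lines above show minikube's CNI auto-selection: with the docker driver and a containerd (non-docker) runtime it recommends kindnet. A minimal Go sketch of that decision rule, with illustrative names (this is not minikube's actual code):

package main

import "fmt"

// chooseCNI mirrors the rule logged at cni.go:160: the docker driver
// combined with a non-docker runtime gets kindnet; otherwise fall back
// to a bridge-style default. Illustrative only.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // prints "kindnet"
}
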
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
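
The restart sequence above is driven by polling docker's view of the container: inspect before starting (state=Stopped), `docker start`, then inspect again until the state is running. A self-contained Go sketch of the same state check the cli_runner lines run (the command and format string are taken from the log; the wrapper function is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the same inspect command the log shows
// and returns the container state, e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-different-port-20211231103230-6736")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state)
}
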
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
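
The long command above writes /etc/containerd/config.toml by shipping the file content as base64 and decoding it on the node, which avoids quoting the TOML inside a nested shell command. A hedged Go sketch of the same trick (the function name and error handling are illustrative, not minikube's implementation):

package main

import (
	"encoding/base64"
	"fmt"
	"os/exec"
)

// writeRemoteFile encodes content as base64 and decodes it on the target
// side with "base64 -d | sudo tee", mirroring the command in the log.
func writeRemoteFile(path, content string) error {
	enc := base64.StdEncoding.EncodeToString([]byte(content))
	cmd := fmt.Sprintf(
		"sudo mkdir -p \"$(dirname %s)\" && printf %%s %s | base64 -d | sudo tee %s >/dev/null",
		path, enc, path)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := writeRemoteFile("/etc/containerd/config.toml", "version = 2\n"); err != nil {
		fmt.Println("write failed:", err)
	}
}
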
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
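
The /etc/hosts rewrite above is idempotent: strip any existing line tagged host.minikube.internal, append the fresh mapping, then copy the temp file back into place with sudo. A Go sketch that assembles an equivalent bash one-liner (the wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// addHostsEntry removes any prior entry for host and appends "ip<TAB>host",
// writing through a temp file and sudo cp, like the command in the log.
func addHostsEntry(ip, host string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; printf '%%s\\t%%s\\n' %s %s; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts",
		host, ip, host)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := addHostsEntry("192.168.67.1", "host.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
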
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
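
The openssl/ln pairs above install each CA into the host trust store: `openssl x509 -hash -noout` prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs lets OpenSSL's hashed-directory lookup find the PEM. A sketch of one installation step (the paths are illustrative, and writing /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the subject hash of a PEM certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink the openssl lookup expects.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // drop any stale link first
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
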
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
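
The warning above comes from reconciling two views of the runtime: crictl reported six kube-system containers, while `runc list` returned JSON null, i.e. nothing paused to resume. A sketch of that cross-check (both commands are taken from the log; the surrounding program is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// IDs of all kube-system containers, one per line.
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl ps failed:", err)
		return
	}
	ids := strings.Fields(string(psOut))

	// runc's view of the same runtime root.
	runcOut, err := exec.Command("sudo", "runc", "--root",
		"/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	var listed []map[string]interface{}
	_ = json.Unmarshal(runcOut, &listed) // the literal "null" unmarshals to a nil slice

	if len(listed) == 0 && len(ids) > 0 {
		fmt.Printf("unpause failed: list returned 0 containers, but ps returned %d\n", len(ids))
	}
}
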
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
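
The block of "Checking apiserver status ..." entries above is a short-interval poll for a kube-apiserver process; when none appears before the budget runs out, minikube concludes the node needs reconfiguring and falls through to `kubeadm reset` plus a fresh init, as the next lines show. A sketch of the poll (the interval and budget here are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process exists,
// using the same pgrep invocation as the log.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("needs reconfigure: timed out waiting for the condition")
}
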
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e003b9328c8be       6de166512aa22       32 seconds ago      Running             kindnet-cni               6                   8e9c4ebe9af8a
	0ddad1b096016       6de166512aa22       5 minutes ago       Exited              kindnet-cni               5                   8e9c4ebe9af8a
	260191439414c       c21b0c7400f98       22 minutes ago      Running             kube-proxy                0                   27df6e69859e8
	360d04f1d2e49       b305571ca60a5       22 minutes ago      Running             kube-apiserver            0                   9a26f93849781
	e488bccab2c37       06a629a7e51cd       22 minutes ago      Running             kube-controller-manager   0                   3c7deabb07da8
	02f5bc6f1fdd0       b2756210eeabf       22 minutes ago      Running             etcd                      0                   9d962dd90af06
	b939ed1a80a18       301ddc62b80b1       22 minutes ago      Running             kube-scheduler            0                   416af3a4e9b8c
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 11:02:25 UTC. --
	Dec 31 10:52:29 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:29.869808674Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:52:29 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:29.870874136Z" level=info msg="StartContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:52:30 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:52:30.084954100Z" level=info msg="StartContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\" returns successfully"
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.302561011Z" level=info msg="Finish piping stderr of container \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.302580875Z" level=info msg="Finish piping stdout of container \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.303432845Z" level=info msg="TaskExit event &TaskExit{ContainerID:93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2,ID:93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2,Pid:4310,ExitStatus:2,ExitedAt:2021-12-31 10:55:10.303131381 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.328616595Z" level=info msg="shim disconnected" id=93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.328721883Z" level=error msg="copy shim log" error="read /proc/self/fd/79: file already closed"
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.338214044Z" level=info msg="RemoveContainer for \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\""
	Dec 31 10:55:10 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:55:10.345063752Z" level=info msg="RemoveContainer for \"577abf6458465acf657506041583ac700c5cd3ba5a9ce34f051a11cc9bc11ea1\" returns successfully"
	Dec 31 10:56:31 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:56:31.847117125Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 10:56:31 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:56:31.877134553Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801\""
	Dec 31 10:56:31 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:56:31.877885609Z" level=info msg="StartContainer for \"0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801\""
	Dec 31 10:56:32 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:56:32.086886691Z" level=info msg="StartContainer for \"0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801\" returns successfully"
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.388393693Z" level=info msg="Finish piping stderr of container \"0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801\""
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.388410524Z" level=info msg="Finish piping stdout of container \"0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801\""
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.389144260Z" level=info msg="TaskExit event &TaskExit{ContainerID:0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801,ID:0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801,Pid:5135,ExitStatus:2,ExitedAt:2021-12-31 10:59:12.388832895 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.421708677Z" level=info msg="shim disconnected" id=0ddad1b09601653589db4a7f85c34a41c4371fd4f4f8f48bd8e930672c0ca801
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.421849633Z" level=error msg="copy shim log" error="read /proc/self/fd/79: file already closed"
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.698346675Z" level=info msg="RemoveContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\""
	Dec 31 10:59:12 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T10:59:12.704937283Z" level=info msg="RemoveContainer for \"93fd192748f7b6ed147b4581f2c8f6d5acc20cea104643a3bde8e596b2fa04a2\" returns successfully"
	Dec 31 11:01:52 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T11:01:52.847069885Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Dec 31 11:01:52 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T11:01:52.871498878Z" level=info msg="CreateContainer within sandbox \"8e9c4ebe9af8a969d9754b6172d689da6fb7538703edbb7efc950442d5ff54ed\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"e003b9328c8be0c48c3a0d4bcf43aeb7f0f14590f60baf4a5ea77661660fab7c\""
	Dec 31 11:01:52 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T11:01:52.872264207Z" level=info msg="StartContainer for \"e003b9328c8be0c48c3a0d4bcf43aeb7f0f14590f60baf4a5ea77661660fab7c\""
	Dec 31 11:01:53 old-k8s-version-20211231102602-6736 containerd[343]: time="2021-12-31T11:01:53.085723465Z" level=info msg="StartContainer for \"e003b9328c8be0c48c3a0d4bcf43aeb7f0f14590f60baf4a5ea77661660fab7c\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20211231102602-6736
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20211231102602-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=old-k8s-version-20211231102602-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_40_01_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:39:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 11:01:57 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 11:01:57 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 11:01:57 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 11:01:57 +0000   Fri, 31 Dec 2021 10:39:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    old-k8s-version-20211231102602-6736
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32879780Ki
	 pods:               110
	System Info:
	 Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	 System UUID:                5a8cca94-3bdf-4013-adda-72ef27798431
	 Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	 Kernel Version:             5.11.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.4.12
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20211231102602-6736                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                kindnet-wttrw                                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                kube-apiserver-old-k8s-version-20211231102602-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                kube-controller-manager-old-k8s-version-20211231102602-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                kube-proxy-7nkns                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                kube-scheduler-old-k8s-version-20211231102602-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                             Message
	  ----    ------                   ----               ----                                             -------
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet, old-k8s-version-20211231102602-6736     Node old-k8s-version-20211231102602-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kube-proxy, old-k8s-version-20211231102602-6736  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [02f5bc6f1fdd0081ac22b1216606e6a5da1908f6dd8b37174cb86189c9245c90] <==
	* 2021-12-31 10:39:52.187856 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-12-31 10:39:52.188222 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2021-12-31 10:39:52.190334 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-12-31 10:39:52.190683 I | embed: listening for metrics on http://192.168.49.2:2381
	2021-12-31 10:39:52.190768 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-12-31 10:39:53.117162 I | raft: aec36adc501070cc is starting a new election at term 1
	2021-12-31 10:39:53.117224 I | raft: aec36adc501070cc became candidate at term 2
	2021-12-31 10:39:53.117263 I | raft: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	2021-12-31 10:39:53.117287 I | raft: aec36adc501070cc became leader at term 2
	2021-12-31 10:39:53.117298 I | raft: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-12-31 10:39:53.117591 I | etcdserver: published {Name:old-k8s-version-20211231102602-6736 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-12-31 10:39:53.117619 I | embed: ready to serve client requests
	2021-12-31 10:39:53.117650 I | etcdserver: setting up the initial cluster version to 3.3
	2021-12-31 10:39:53.117691 I | embed: ready to serve client requests
	2021-12-31 10:39:53.120604 I | embed: serving client requests on 127.0.0.1:2379
	2021-12-31 10:39:53.121294 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-12-31 10:39:53.121579 I | embed: serving client requests on 192.168.49.2:2379
	2021-12-31 10:39:53.121964 I | etcdserver/api: enabled capabilities for version 3.3
	2021-12-31 10:40:16.985363 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-9zxjv\" " with result "range_response_count:1 size:1435" took too long (179.374899ms) to execute
	2021-12-31 10:49:53.137699 I | mvcc: store.index: compact 565
	2021-12-31 10:49:53.138706 I | mvcc: finished scheduled compaction at 565 (took 609.084µs)
	2021-12-31 10:54:53.143447 I | mvcc: store.index: compact 650
	2021-12-31 10:54:53.144268 I | mvcc: finished scheduled compaction at 650 (took 406.36µs)
	2021-12-31 10:59:53.148072 I | mvcc: store.index: compact 732
	2021-12-31 10:59:53.150448 I | mvcc: finished scheduled compaction at 732 (took 1.981608ms)
	
	* 
	* ==> kernel <==
	*  11:02:25 up  1:44,  0 users,  load average: 0.64, 0.52, 0.92
	Linux old-k8s-version-20211231102602-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [360d04f1d2e497917d40c239dd1ddc12199edce8119e293dbbf9e16d2ff6195d] <==
	* I1231 10:54:57.219736       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:54:57.219844       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:54:57.219920       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:54:57.219941       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:55:57.220189       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:55:57.220305       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:55:57.220348       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:55:57.220364       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:57:57.220641       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:57:57.220742       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:57:57.220812       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:57:57.220829       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 10:59:57.223050       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 10:59:57.223199       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 10:59:57.223295       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:59:57.223314       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1231 11:00:57.223591       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1231 11:00:57.223686       1 handler_proxy.go:99] no RequestInfo found in the context
	E1231 11:00:57.223768       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:00:57.223788       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [e488bccab2c374294dc4e1182a1f7c01461d03131238f7c50a0b1a3bc38b498d] <==
	* E1231 10:55:55.012474       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:17.269769       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:25.264095       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:56:49.271937       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:56:55.516427       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:57:21.274177       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:57:25.768336       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:57:53.276289       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:57:56.020472       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:58:25.278259       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:58:26.272176       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1231 10:58:56.524550       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:58:57.280062       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:59:26.776599       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 10:59:29.282637       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 10:59:57.028709       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:00:01.285191       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:00:27.280485       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:00:33.287135       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:00:57.532353       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:01:05.289494       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:01:27.784787       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:01:37.291373       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:01:58.036894       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:02:09.293377       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [260191439414cb571c50079bad300e6fcefc8412455207a3187344dc06e156e8] <==
	* W1231 10:40:17.793443       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1231 10:40:17.813322       1 node.go:135] Successfully retrieved node IP: 192.168.49.2
	I1231 10:40:17.813381       1 server_others.go:149] Using iptables Proxier.
	I1231 10:40:17.814012       1 server.go:529] Version: v1.16.0
	I1231 10:40:17.814913       1 config.go:313] Starting service config controller
	I1231 10:40:17.814953       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1231 10:40:17.819152       1 config.go:131] Starting endpoints config controller
	I1231 10:40:17.819190       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1231 10:40:17.915302       1 shared_informer.go:204] Caches are synced for service config 
	I1231 10:40:17.919428       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [b939ed1a80a1833f964e536cf3c9e9cdc859e60141643e37c72deb76b9c1a7d7] <==
	* E1231 10:39:56.485784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:56.486236       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:56.487207       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:56.487313       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:56.487518       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:56.488908       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:56.489055       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:56.489075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:56.490229       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:56.490565       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:56.491226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.487195       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1231 10:39:57.488364       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:39:57.489386       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:39:57.490301       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:39:57.491544       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:39:57.492628       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:39:57.493751       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:39:57.496400       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:39:57.497358       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:39:57.499029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:39:57.499226       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:40:16.986652       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.001392       1 factory.go:585] pod is already present in the activeQ
	E1231 10:40:19.791910       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:39:29 UTC, end at Fri 2021-12-31 11:02:25 UTC. --
	Dec 31 11:01:27 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:27.844640     927 pod_workers.go:191] Error syncing pod c6532ac6-8d82-4c81-b651-adeb7a219e08 ("kindnet-wttrw_kube-system(c6532ac6-8d82-4c81-b651-adeb7a219e08)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-wttrw_kube-system(c6532ac6-8d82-4c81-b651-adeb7a219e08)"
	Dec 31 11:01:31 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:31.276805     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:01:35 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:35.343792     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:01:35 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:35.343865     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 11:01:36 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:36.277988     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:01:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:41.278983     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:01:41 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:41.844570     927 pod_workers.go:191] Error syncing pod c6532ac6-8d82-4c81-b651-adeb7a219e08 ("kindnet-wttrw_kube-system(c6532ac6-8d82-4c81-b651-adeb7a219e08)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-wttrw_kube-system(c6532ac6-8d82-4c81-b651-adeb7a219e08)"
	Dec 31 11:01:45 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:45.374547     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:01:45 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:45.374603     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 11:01:46 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:46.280179     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:01:51 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:51.281131     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:01:55 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:55.409383     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:01:55 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:55.409425     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 11:01:56 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:01:56.282256     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:01 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:01.283067     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:05 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:05.445512     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:02:05 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:05.445557     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 11:02:06 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:06.283892     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:11 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:11.284930     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:15 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:15.483200     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:02:15 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:15.483239     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	Dec 31 11:02:16 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:16.285821     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:21 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:21.286622     927 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Dec 31 11:02:25 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:25.517677     927 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods"
	Dec 31 11:02:25 old-k8s-version-20211231102602-6736 kubelet[927]: E1231 11:02:25.518094     927 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1 (86.258086ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-9zxjv" not found
	Error from server (NotFound): pods "metrics-server-5b7b789f-vbdjk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-sdtqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-766959b846-br66c" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20211231102602-6736 describe pod coredns-5644d7b6d9-9zxjv metrics-server-5b7b789f-vbdjk storage-provisioner dashboard-metrics-scraper-6b84985989-sdtqb kubernetes-dashboard-766959b846-br66c: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.96s)
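The kubelet log above points at a likely root cause for this failure: the kindnet-cni container of pod kindnet-wttrw is stuck in CrashLoopBackOff, so the CNI plugin never initializes, the node never becomes Ready, and the dashboard and addon pods are never scheduled. If the cluster were still running, a first check could be the following (illustrative commands, not part of the test harness):

	kubectl --context old-k8s-version-20211231102602-6736 -n kube-system describe pod kindnet-wttrw
	kubectl --context old-k8s-version-20211231102602-6736 -n kube-system logs kindnet-wttrw -c kindnet-cni --previous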

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-dwbwm" [5a12cb6a-d97e-4025-b6a0-9421173550dc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
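The Unschedulable reason above (the node.kubernetes.io/not-ready taint) mirrors the CNI symptom in the old-k8s-version cluster: until the network plugin reports ready, the node keeps that taint and the dashboard pod cannot tolerate it. One way to confirm would be to list the taints directly; the embed-certs context name does not appear in this log, so the name below is a placeholder:

	kubectl --context <embed-certs-context> get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'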

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:00:02.358515    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:00:10.309748    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:00:22.105260    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:00:36.756018    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:00:43.945229    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
E1231 11:01:13.450608    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:01:33.360446    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:01:59.314053    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:02:39.269599    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
E1231 11:04:02.316336    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:05:10.310255    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:05:22.104587    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:05:36.756277    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:05:43.946077    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:06:01.279850    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:01.285210    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:01.295547    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:01.315864    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:01.356193    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:06:01.918111    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory

                                                
                                                
E1231 11:06:03.839587    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory

                                                
                                                
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2021-12-31 11:06:17.181028994 +0000 UTC m=+5076.162844926
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe po kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe po kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard: context deadline exceeded (1.73µs)
start_stop_delete_test.go:272: kubectl --context embed-certs-20211231102953-6736 describe po kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 logs kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 logs kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard: context deadline exceeded (265ns)
start_stop_delete_test.go:272: kubectl --context embed-certs-20211231102953-6736 logs kubernetes-dashboard-ccd587f44-dwbwm -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (110ns)
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-20211231102953-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
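Note that each kubectl invocation above failed in well under a millisecond (1.73µs, 265ns, 110ns): the test's 9m0s Go context had already expired, so the commands most likely never reached the apiserver at all. To re-check the dashboard addon by hand, a minimal sketch, assuming the embed-certs-20211231102953-6736 profile is still running, is to reuse the same context, namespace, and label selector the harness polls:

	kubectl --context embed-certs-20211231102953-6736 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-20211231102953-6736 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper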
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect embed-certs-20211231102953-6736
helpers_test.go:236: (dbg) docker inspect embed-certs-20211231102953-6736:
-- stdout --
	[
	    {
	        "Id": "de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676",
	        "Created": "2021-12-31T10:30:07.254073431Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:43:23.613074153Z",
	            "FinishedAt": "2021-12-31T10:43:22.23405709Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hostname",
	        "HostsPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/hosts",
	        "LogPath": "/var/lib/docker/containers/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676/de3bee7bab0c6c3b47c8628e9b59b5ff5997b09927b70628f24f7cfa3f8cb676-json.log",
	        "Name": "/embed-certs-20211231102953-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-20211231102953-6736:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20211231102953-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf
07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/d
ocker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12
adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394c
fc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/36d7a0b99e6f9eb48cffc64e609dec4e9753aa19eb4de8b261f2d49612ea9d7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20211231102953-6736",
	                "Source": "/var/lib/docker/volumes/embed-certs-20211231102953-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20211231102953-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "name.minikube.sigs.k8s.io": "embed-certs-20211231102953-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfc73bddd32d4a580e80ede53e861c2019c40094c0f4bf8dbec95ea0223d20b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfc73bddd32d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20211231102953-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "de3bee7bab0c",
	                        "embed-certs-20211231102953-6736"
	                    ],
	                    "NetworkID": "821d0d66bcf3a6ca41969ece76bf8b556f86e66628fb90783541e59bdec0e994",
	                    "EndpointID": "9ce1b9b9e6af1217d03fa31376bf39eb0632af0bb5247bc92fc3c48c1620d77a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
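The inspect output confirms the container itself is healthy: State.Status is "running", and the StartedAt timestamp of 10:43:23 lines up with the stop/start sequence recorded in the Audit table below, so the failure sits inside the cluster rather than at the Docker layer. For manual triage it can be easier to pull just those fields with Go templates instead of dumping the full JSON; a sketch against the same container name:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' embed-certs-20211231102953-6736
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-20211231102953-6736

minikube itself uses the same mechanism later in this log (docker container inspect ... --format={{.State.Status}}).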
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20211231102953-6736 logs -n 25: (1.050130011s)
helpers_test.go:253: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | newest-cni-20211231103230-6736                 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:34:59 UTC | Fri, 31 Dec 2021 10:35:00 UTC |
	|         | newest-cni-20211231103230-6736                    |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:37:37 UTC | Fri, 31 Dec 2021 10:37:38 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:53:21 UTC | Fri, 31 Dec 2021 10:53:22 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:57:14 UTC | Fri, 31 Dec 2021 10:57:15 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:59:58 UTC | Fri, 31 Dec 2021 10:59:59 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:02:24 UTC | Fri, 31 Dec 2021 11:02:25 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:02:26 UTC | Fri, 31 Dec 2021 11:02:29 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6
736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:tru
e default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/c
erts/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
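The base64 payload above is containerd's complete config.toml. Decoding just its opening bytes is enough to confirm the shape of the file (a spot check of the prefix, not the full config):

	$ echo 'dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAK' | base64 -d
	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0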
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
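The rendered kubeadm config is four YAML documents in one file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration for the components it provisions. It can be sanity-checked without mutating node state (sketch):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run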
	
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
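In the drop-in above, the bare ExecStart= line is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service so the next line fully replaces the command instead of appending a second one. The merged unit can be inspected with:

	systemctl cat kubelet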
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
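Each openssl x509 -hash call above computes the subject-name hash that OpenSSL uses to look up trust anchors in /etc/ssl/certs, and the ln -fs that follows creates the matching <hash>.0 symlink (b5213941.0 for minikubeCA.pem above); this is the same layout c_rehash maintains. In shell form (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"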
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
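Because none of the four kubeconfig files exist yet, the stale-config cleanup is skipped and minikube proceeds straight to kubeadm init below. The same existence probe, run by hand (sketch):

	ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf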
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
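The repeated "kubectl get sa default" calls above are a readiness poll: pods cannot be admitted in a namespace until kube-controller-manager has created its default ServiceAccount, so minikube loops until the account appears. An equivalent loop (sketch):

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done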
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b9032aa24a223       6de166512aa22       27 seconds ago      Running             kindnet-cni               6                   b2ae6303a3d29
	6a6c3adc4dbd0       6de166512aa22       5 minutes ago       Exited              kindnet-cni               5                   b2ae6303a3d29
	62c0d868eb022       b46c42588d511       22 minutes ago      Running             kube-proxy                0                   7d87514afca2c
	8896082530359       71d575efe6283       22 minutes ago      Running             kube-scheduler            1                   caee512e50be5
	bedf4fc421a5b       f51846a4fd288       22 minutes ago      Running             kube-controller-manager   1                   23a53efe4f57a
	5e4fcc10f62c1       b6d7abedde399       22 minutes ago      Running             kube-apiserver            1                   5526833e62549
	e2c98b3c8c237       25f8c7f3da61c       22 minutes ago      Running             etcd                      1                   9e5cd803bf1ec
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 11:06:18 UTC. --
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.645342329Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.646022333Z" level=info msg="StartContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:56:16 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:56:16.897928524Z" level=info msg="StartContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\" returns successfully"
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.197396573Z" level=info msg="Finish piping stdout of container \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.197409323Z" level=info msg="Finish piping stderr of container \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.198199505Z" level=info msg="TaskExit event &TaskExit{ContainerID:1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498,ID:1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498,Pid:2827,ExitStatus:2,ExitedAt:2021-12-31 10:58:57.197958642 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.225294992Z" level=info msg="shim disconnected" id=1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.225394258Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.458121651Z" level=info msg="RemoveContainer for \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\""
	Dec 31 10:58:57 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T10:58:57.465587152Z" level=info msg="RemoveContainer for \"a22d65cfe95fb98ab13d15673eedb7448959fae3ae7c58673adb3b498caa501b\" returns successfully"
	Dec 31 11:00:20 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:00:20.614263133Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Dec 31 11:00:20 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:00:20.645153773Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35\""
	Dec 31 11:00:20 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:00:20.645809840Z" level=info msg="StartContainer for \"6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35\""
	Dec 31 11:00:20 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:00:20.804871324Z" level=info msg="StartContainer for \"6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35\" returns successfully"
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.100298983Z" level=info msg="Finish piping stderr of container \"6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35\""
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.100326716Z" level=info msg="Finish piping stdout of container \"6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35\""
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.101316704Z" level=info msg="TaskExit event &TaskExit{ContainerID:6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35,ID:6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35,Pid:3181,ExitStatus:2,ExitedAt:2021-12-31 11:03:01.100938382 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.130832935Z" level=info msg="shim disconnected" id=6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.130930384Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.916507475Z" level=info msg="RemoveContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\""
	Dec 31 11:03:01 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:03:01.923084841Z" level=info msg="RemoveContainer for \"1006b19bf3eaa88576d99e16a677d7f66323bd650532f34914f74a1211968498\" returns successfully"
	Dec 31 11:05:50 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:05:50.614092311Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Dec 31 11:05:50 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:05:50.643636987Z" level=info msg="CreateContainer within sandbox \"b2ae6303a3d29623c4e30b092f55ba5537366b31656055b31b8b79a12dbe39ff\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"b9032aa24a223d0ad0881931554f74934d6e09f8078be412b64df8d2160a61e3\""
	Dec 31 11:05:50 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:05:50.644305875Z" level=info msg="StartContainer for \"b9032aa24a223d0ad0881931554f74934d6e09f8078be412b64df8d2160a61e3\""
	Dec 31 11:05:50 embed-certs-20211231102953-6736 containerd[342]: time="2021-12-31T11:05:50.884108813Z" level=info msg="StartContainer for \"b9032aa24a223d0ad0881931554f74934d6e09f8078be412b64df8d2160a61e3\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20211231102953-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20211231102953-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=embed-certs-20211231102953-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_43_59_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:43:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20211231102953-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 11:06:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 11:04:40 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 11:04:40 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 11:04:40 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 11:04:40 +0000   Fri, 31 Dec 2021 10:43:52 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20211231102953-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                df6948c7-cd35-4573-a0b7-f7c0ae501659
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20211231102953-6736                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-sz9gt                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-embed-certs-20211231102953-6736              250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-20211231102953-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-bf6l7                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-20211231102953-6736              100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 22m                kube-proxy  
	  Normal  Starting                 22m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x4 over 22m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x4 over 22m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x3 over 22m)  kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet     Node embed-certs-20211231102953-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [e2c98b3c8c23748a45dcacedd39e95616ad8442e36bb6b6fda207f3c9cd41381] <==
	* {"level":"info","ts":"2021-12-31T10:43:52.010Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2021-12-31T10:43:52.381Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20211231102953-6736 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.382Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:43:52.383Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2021-12-31T10:43:52.384Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:53:53.198Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":662}
	{"level":"info","ts":"2021-12-31T10:53:53.199Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":662,"took":"652.758µs"}
	{"level":"info","ts":"2021-12-31T10:58:53.204Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":766}
	{"level":"info","ts":"2021-12-31T10:58:53.205Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":766,"took":"355.809µs"}
	{"level":"info","ts":"2021-12-31T11:03:53.208Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":868}
	{"level":"info","ts":"2021-12-31T11:03:53.209Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":868,"took":"295.557µs"}
	
	* 
	* ==> kernel <==
	*  11:06:18 up  1:48,  0 users,  load average: 0.59, 0.55, 0.84
	Linux embed-certs-20211231102953-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [5e4fcc10f62c1036c54aadc249c4bd994a626fae29d5142d5ec3303290197b95] <==
	* I1231 10:54:56.112849       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:56:56.114043       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:56:56.114128       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:56:56.114136       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:58:56.117654       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:58:56.117747       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:58:56.117755       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:59:56.118360       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:59:56.118432       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:59:56.118440       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:01:56.119336       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:01:56.119423       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:01:56.119431       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:03:56.125189       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:03:56.125283       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:03:56.125294       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:04:56.125705       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:04:56.126063       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:04:56.126089       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [bedf4fc421a5b6eb6f42473129bdc4ccad70191cd113ec514f00efa708d19047] <==
	* W1231 11:00:11.255324       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:00:40.805898       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:00:41.274439       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:01:10.824602       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:01:11.291094       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:01:40.845122       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:01:41.308432       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:02:10.871706       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:02:11.328018       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:02:40.890299       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:02:41.347070       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:03:10.901197       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:03:11.362683       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:03:40.910220       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:03:41.382276       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:04:10.927335       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:04:11.399295       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:04:40.946791       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:04:41.416049       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:05:10.963451       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:05:11.433309       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:05:40.974946       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:05:41.450533       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:06:10.993045       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:06:11.468748       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [62c0d868eb022292299288a2d75ff0a1b7915bda2773f4a9103f725d6f43f491] <==
	* I1231 10:44:12.607400       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1231 10:44:12.607485       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1231 10:44:12.607591       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:44:12.801716       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:44:12.802050       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:44:12.802136       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:44:12.802156       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:44:12.802606       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:44:12.803335       1 config.go:317] "Starting service config controller"
	I1231 10:44:12.803356       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:44:12.803492       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:44:12.803697       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:44:12.904704       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:44:12.907142       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [889608253035982887397c394e7ec41a7768efbf6a0e85f40e25bcc483a2df07] <==
	* W1231 10:43:55.191217       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:43:55.191299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:43:55.193121       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1231 10:43:55.193371       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1231 10:43:55.193600       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:43:55.193679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:43:55.194203       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:43:55.194299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:43:55.194319       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:55.194336       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:55.194512       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:43:55.194595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1231 10:43:55.194973       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:43:55.195211       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:43:55.195010       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:55.195470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.133167       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1231 10:43:56.133248       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1231 10:43:56.183335       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:43:56.183397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:43:56.356344       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:43:56.356384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:43:56.381076       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:43:56.381112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1231 10:43:59.289785       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:43:23 UTC, end at Fri 2021-12-31 11:06:18 UTC. --
	Dec 31 11:04:57 embed-certs-20211231102953-6736 kubelet[1376]: I1231 11:04:57.611457    1376 scope.go:110] "RemoveContainer" containerID="6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35"
	Dec 31 11:04:57 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:04:57.611773    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 11:04:59 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:04:59.115104    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:04 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:04.116429    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:09 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:09.118034    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:10 embed-certs-20211231102953-6736 kubelet[1376]: I1231 11:05:10.611315    1376 scope.go:110] "RemoveContainer" containerID="6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35"
	Dec 31 11:05:10 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:10.611617    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 11:05:14 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:14.118772    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:19 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:19.119674    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:23 embed-certs-20211231102953-6736 kubelet[1376]: I1231 11:05:23.611555    1376 scope.go:110] "RemoveContainer" containerID="6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35"
	Dec 31 11:05:23 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:23.611831    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 11:05:24 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:24.120459    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:29 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:29.121872    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:34 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:34.123246    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:37 embed-certs-20211231102953-6736 kubelet[1376]: I1231 11:05:37.612171    1376 scope.go:110] "RemoveContainer" containerID="6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35"
	Dec 31 11:05:37 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:37.612635    1376 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-sz9gt_kube-system(a4e2bd9c-691b-43fc-99f8-6e269c1c58ea)\"" pod="kube-system/kindnet-sz9gt" podUID=a4e2bd9c-691b-43fc-99f8-6e269c1c58ea
	Dec 31 11:05:39 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:39.124261    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:44 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:44.125289    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:49 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:49.126373    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:50 embed-certs-20211231102953-6736 kubelet[1376]: I1231 11:05:50.611848    1376 scope.go:110] "RemoveContainer" containerID="6a6c3adc4dbd089baf5aabad30a28b9a66b2198c8d33fdf9291d6ae13210ca35"
	Dec 31 11:05:54 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:54.127392    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:05:59 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:05:59.128471    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:06:04 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:06:04.130115    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:06:09 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:06:09.131043    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:06:14 embed-certs-20211231102953-6736 kubelet[1376]: E1231 11:06:14.132757    1376 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
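
The kubelet section of the log dump above is the proximate failure: the node never leaves NotReady because no CNI config is ever initialized ("cni plugin not initialized") while the kindnet-cni container sits in CrashLoopBackOff, and the metrics.k8s.io 503s in the kube-apiserver and kube-controller-manager sections are downstream of the same problem (the metrics-server pod cannot become ready on a NotReady node, so the aggregated API has no endpoints). A minimal follow-up against this profile might look like the sketch below; these commands are illustrative only and were not run as part of this job:

	# inspect the crash-looping CNI pod named in the kubelet log above
	kubectl --context embed-certs-20211231102953-6736 -n kube-system logs kindnet-sz9gt --previous
	# check whether any CNI config was actually written on the node
	out/minikube-linux-amd64 ssh -p embed-certs-20211231102953-6736 -- sudo ls -l /etc/cni/net.d
	# confirm the aggregated metrics API is the source of the 503s
	kubectl --context embed-certs-20211231102953-6736 get apiservice v1beta1.metrics.k8s.io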
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
helpers_test.go:262: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
helpers_test.go:271: non-running pods: coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:273: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1 (90.520043ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-4t6g6" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-fwqnh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-8ctl7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-dwbwm" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context embed-certs-20211231102953-6736 describe pod coredns-64897985d-4t6g6 metrics-server-7f49dcbd7-fwqnh storage-provisioner dashboard-metrics-scraper-56974995fc-8ctl7 kubernetes-dashboard-ccd587f44-dwbwm: exit status 1
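
The five NotFound errors are expected given how the describe was invoked: the pods from the non-running list live in kube-system and kubernetes-dashboard, but the `kubectl describe pod` call passes no namespace, so it searches only the context's default namespace. A post-mortem that carries the namespace through could look like the following sketch (illustrative, not the harness's actual code):

	# resolve namespace/name pairs and describe each pod in its own namespace
	kubectl --context embed-certs-20211231102953-6736 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context embed-certs-20211231102953-6736 -n "$ns" describe pod "$name"
	done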
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (542.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-gqg75" [4d8537b1-da9d-42ab-a91b-3ef7858d6421] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
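
The dashboard pod never gets past Pending because the only node still carries the node.kubernetes.io/not-ready taint, which the pod does not tolerate; the context-deadline warnings that follow are the 9m0s wait polling that same state. The taint can be confirmed directly (illustrative; <profile> stands for this test's default-k8s-different-port profile name, which does not appear in this excerpt):

	# list each node with its current taints
	kubectl --context <profile> get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'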

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[the preceding "=== CONT" / WARNING pair repeats verbatim 59 more times; identical output elided]

E1231 11:03:29.444713    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
[the preceding pair repeats verbatim 38 more times; identical output elided]
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
helpers_test.go:328: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E1231 11:04:41.557049    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 11:06:01.436742    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:01.597368    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:02.559280    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:06.400348    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:11.520659    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:13.450074    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/enable-default-cni-20211231101406-6736/client.crt: no such file or directory
E1231 11:06:21.760902    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:32.489816    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 11:06:42.241467    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:06:59.313235    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
E1231 11:07:23.202500    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:07:39.269489    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 11:08:29.444032    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 11:08:45.122970    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/old-k8s-version-20211231102602-6736/client.crt: no such file or directory
E1231 11:08:46.992607    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/no-preload-20211231102928-6736/client.crt: no such file or directory
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
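The "timed out waiting for the condition" message is the stock timeout error from Kubernetes' wait helpers, which these tests build against. A minimal sketch (assuming k8s.io/apimachinery's wait package; the condition body is a stand-in, not the test's real dashboard-pod check) of the 9m poll loop ending with exactly that error:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// PollImmediate runs the condition every interval until it returns true
	// or the timeout elapses, in which case it returns wait.ErrWaitTimeout.
	err := wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
		// A real check would list pods labeled k8s-app=kubernetes-dashboard
		// and return true once one is Running; this stand-in never succeeds.
		return false, nil
	})
	fmt.Println(err) // timed out waiting for the condition
}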
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2021-12-31 11:09:00.839238153 +0000 UTC m=+5239.821054084
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe po kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe po kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard: context deadline exceeded (1.155µs)
start_stop_delete_test.go:272: kubectl --context default-k8s-different-port-20211231103230-6736 describe po kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 logs kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 logs kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard: context deadline exceeded (241ns)
start_stop_delete_test.go:272: kubectl --context default-k8s-different-port-20211231103230-6736 logs kubernetes-dashboard-ccd587f44-gqg75 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (208ns)
start_stop_delete_test.go:278: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20211231103230-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
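The nanosecond-to-microsecond failure durations above (1.155µs, 241ns, 208ns) indicate the follow-up kubectl commands inherited a context whose deadline had already expired, so they abort before the process even starts. A minimal sketch (hypothetical, not the harness's code) of that failure mode:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // the deadline is long past, as after a 9m wait

	// CommandContext checks the context before launching the process, so the
	// command fails almost instantly with no output.
	out, err := exec.CommandContext(ctx, "kubectl", "get", "pods").CombinedOutput()
	fmt.Println(len(out), err) // 0 context deadline exceeded
}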
start_stop_delete_test.go:282: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect default-k8s-different-port-20211231103230-6736
helpers_test.go:236: (dbg) docker inspect default-k8s-different-port-20211231103230-6736:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1",
	        "Created": "2021-12-31T10:32:50.365330019Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-31T10:46:06.801649957Z",
	            "FinishedAt": "2021-12-31T10:46:05.341862453Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/hosts",
	        "LogPath": "/var/lib/docker/containers/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1/282fb8467680469f70cf3e8c08a0ff3d83ec42cf15e60c31d2a06ec4c8f9c7b1-json.log",
	        "Name": "/default-k8s-different-port-20211231103230-6736",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20211231103230-6736:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20211231103230-6736",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5-init/diff:/var/lib/docker/overlay2/c534a67c1f4c5b9f1d9e8fda7874029a0b48633454e02a0b6c810aa9cf70f36b/diff:/var/lib/docker/overlay2/de7a9fc119b7ade7c9f3b24bea5108d56a2268af67d230db0cccbbf8190cc502/diff:/var/lib/docker/overlay2/7dde3b10db81d42c3216201bd8676123e0e230bfa173b31e053af8fa206a800e/diff:/var/lib/docker/overlay2/97454ec819da74666dcc72f8e5b4653886c44f3699cecf13a018ecf95d935273/diff:/var/lib/docker/overlay2/f63f64cf22a9591cbcafeb284a265882ea890544f2b083b747047bcf47a45f11/diff:/var/lib/docker/overlay2/85a2123491b64d14f7493b3bc6a519d9df28cc3680976b9e5a7ecf783be5a304/diff:/var/lib/docker/overlay2/2ec15009b968007c3be12325edca4c12821b52fc616ace66d456038c7f9b8b26/diff:/var/lib/docker/overlay2/c45e9285ae7064da0be5f07d29b035ad54f64e529db00ffe6f2dce6ad543dccf/diff:/var/lib/docker/overlay2/d95000893ca52351a7804b110df43c1656b4bc7fbe4940c9e0ff2744233b1f93/diff:/var/lib/docker/overlay2/38e3bf07bc49f88e1e7214644d31095a6178f5156c6a5ff3fbcaf8676336a52d/diff:/var/lib/docker/overlay2/fe45a9a5036ea98542c5fd7808e66f8b6ffe9d0c545de700f7cbe39d6280e0e3/diff:/var/lib/docker/overlay2/8cd90b043b30c66cbeacbe4ca21881d85717c6e16f1fe627e2eecdeaf461da6c/diff:/var/lib/docker/overlay2/7221b66a068ae65d9a1c089fe5c60e7230aa950b3d47eb7e90a1776c2a45931b/diff:/var/lib/docker/overlay2/86ce10ecfb90cfd8e1f7d722715353b823ddfa58b93eefa2c1efffff899916ee/diff:/var/lib/docker/overlay2/12ea1316fade88c6e1d2272d6dbeb629875ff96f4dd6f6ead0ff6a4e7e3a3067/diff:/var/lib/docker/overlay2/0fe276873166ac11302901f1bb2c547c71e23fac2197e717e789b6ce7d1eb3ce/diff:/var/lib/docker/overlay2/81bd863acb94ff576a1f1f046ee87693930a12209e915a09f2a720c77d1357f3/diff:/var/lib/docker/overlay2/8875d87537d51e137e2fbff41fe61dadeefa46057e0c93d257ec4754b5ba95f3/diff:/var/lib/docker/overlay2/86e75ae4c708ad2e78d21326ca3b7f552b7903d8c7a0a66e086feaacd3a8c002/diff:/var/lib/docker/overlay2/c688ba4459332409ab260296b96d566d186d307b81308fb3b59724fe6fed3c7d/diff:/var/lib/docker/overlay2/45c4b946a887c9de4f592591b3e65041083baba75d5bed9b0995339de1cafd02/diff:/var/lib/docker/overlay2/3927af3e4dc5e4af94ac76cbe5a3b9759b80e10646aabc655f5c9b56ae13e650/diff:/var/lib/docker/overlay2/11dd3efafe062edcbd4dd837545d570514492786b08abf77f57fe300d225eaa5/diff:/var/lib/docker/overlay2/b8e0f4d5369eb0c633092d9e21501d99ccb27fafaad6667209e1d988c337ae9b/diff:/var/lib/docker/overlay2/c605476b22b48b266163ca6ee5fcb52b15458f1d7a77458390109b6e795f72f3/diff:/var/lib/docker/overlay2/19b31873fbc8858f6b143ea8dc98572f32110c70c29732285bdea89dd6163287/diff:/var/lib/docker/overlay2/171afcaaccc199dc3a653d1bf42453959e7c6bd7bc1e8340f94454b5f5fb6d2d/diff:/var/lib/docker/overlay2/2000b0fa8d671b6e34ec28228440eb7e6a5ddcd6b2cb657836d8522e3fb4454e/diff:/var/lib/docker/overlay2/1294bf56384681b45b611fe07d7b32c6c87a67eee568452d1f8123bf319f7a4c/diff:/var/lib/docker/overlay2/3199bb762e90968c1998250099f781c0e8954f4d62df39a80a384b9208149421/diff:/var/lib/docker/overlay2/036a54cd701c84ee783285b30f6cf33b92858f1a11ebd5db15cffffea12adcd9/diff:/var/lib/docker/overlay2/552a166f9d1f0d79a7c45c08db35c6db9366e5d8e1321b62ffc3d8a9ea458797/diff:/var/lib/docker/overlay2/98f34e7af675026bd6dd9aaf43b996b5a992e04fe33f238d16d38e7ccad2fb91/diff:/var/lib/docker/overlay2/c1ed97ca4514bd37713c82daa6ba805285f12b5aa51599e9f915b8755fbf55a7/diff:/var/lib/docker/overlay2/86b8ba00550442fc509be298ff07205e99a171d56a70a2e1735cdfd4a9ab745e/diff:/var/lib/docker/overlay2/109092c1424dfee38a4eb63e39a9bb5d3de4b011f84e30bd021bf73ee2f47aa1/diff:/var/lib/docker/overlay2/87fd60ac9e5dd5a701def2c0a9e1c64463bd4779be06a990c86e5561a4349312/diff:/var/lib/docker/overlay2/145545da7fd578da8a41adec4cec3a9b347b4e609c900ee9365ae7735c7859c6/diff:/var/lib/docker/overlay2/e6a20d33de71269404b7fdc1fa6f915dcbc5a337e0625ef4536cb942bb55c19b/diff:/var/lib/docker/overlay2/38829872ccac03d0dae45b93d1f34e61fd179b6fc19f22fd27d4431ee773090f/diff:/var/lib/docker/overlay2/3c7fa7cc3994b36d6b984ebe7c46b535829f02814130486c4ecf5a0976f4f7dc/diff:/var/lib/docker/overlay2/b635e671b4d1409cfd8b2da5e825f1ec95394cfc12c58befe6073fbb72ba380d/diff:/var/lib/docker/overlay2/1947bd69c7dfab5dd5faf9672bfd7026287734dc23ee3e44777917f2f0a5a94a/diff:/var/lib/docker/overlay2/584a032dd69c42439df99f15e55b3cdf7afb4340f59e1938ce4e32d8f154f45b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fef875d4577144eb38b6f267e42a49bcc3d090783d046c662abffbc73f8d8f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20211231103230-6736",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20211231103230-6736/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20211231103230-6736",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20211231103230-6736",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c51c239834c0a79db122933a66bc297c5e82f8810e0ea189de2970c0af2302b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c51c239834c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20211231103230-6736": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "282fb8467680",
	                        "default-k8s-different-port-20211231103230-6736"
	                    ],
	                    "NetworkID": "e1788769ca7736a71ee22c1f2c56bcd2d9ff496f9d3c2faac492c32b43c45e2f",
	                    "EndpointID": "b9fb2147bd35b928ce091697818409020532d192f5d386f126eee3cf42c8c85a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
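The inspect output above shows the container's published ports: the apiserver's 8444/tcp (the "different port" under test) is bound to an ephemeral localhost port, 49429 here. A small Go sketch (assuming only that the docker CLI is on PATH; the template indexing is standard docker inspect -f usage) of recovering that mapping programmatically:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "default-k8s-different-port-20211231103230-6736"
	// The -f template indexes NetworkSettings.Ports["8444/tcp"][0].HostPort.
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver published on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
}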
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20211231103230-6736 logs -n 25: (1.044189055s)
helpers_test.go:253: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:02 UTC | Fri, 31 Dec 2021 10:39:03 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:05 UTC | Fri, 31 Dec 2021 10:39:06 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:06 UTC | Fri, 31 Dec 2021 10:39:07 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:07 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:39:28 UTC | Fri, 31 Dec 2021 10:39:28 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:57 UTC | Fri, 31 Dec 2021 10:42:58 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:42:59 UTC | Fri, 31 Dec 2021 10:43:00 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:01 UTC | Fri, 31 Dec 2021 10:43:02 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:02 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:43:22 UTC | Fri, 31 Dec 2021 10:43:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:44:18 UTC | Fri, 31 Dec 2021 10:44:19 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:40 UTC | Fri, 31 Dec 2021 10:45:41 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:42 UTC | Fri, 31 Dec 2021 10:45:43 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:44 UTC | Fri, 31 Dec 2021 10:45:45 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:45:45 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:46:05 UTC | Fri, 31 Dec 2021 10:46:05 UTC |
	|         | default-k8s-different-port-20211231103230-6736    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:48:11 UTC | Fri, 31 Dec 2021 10:48:12 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:50:55 UTC | Fri, 31 Dec 2021 10:50:56 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:53:21 UTC | Fri, 31 Dec 2021 10:53:22 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:57:14 UTC | Fri, 31 Dec 2021 10:57:15 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | default-k8s-different-port-20211231103230-6736    | default-k8s-different-port-20211231103230-6736 | jenkins | v1.24.0 | Fri, 31 Dec 2021 10:59:58 UTC | Fri, 31 Dec 2021 10:59:59 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| -p      | old-k8s-version-20211231102602-6736               | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:02:24 UTC | Fri, 31 Dec 2021 11:02:25 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20211231102602-6736            | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:02:26 UTC | Fri, 31 Dec 2021 11:02:29 UTC |
	|         | old-k8s-version-20211231102602-6736               |                                                |         |         |                               |                               |
	| -p      | embed-certs-20211231102953-6736                   | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:06:17 UTC | Fri, 31 Dec 2021 11:06:18 UTC |
	|         | logs -n 25                                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20211231102953-6736                | jenkins | v1.24.0 | Fri, 31 Dec 2021 11:06:19 UTC | Fri, 31 Dec 2021 11:06:22 UTC |
	|         | embed-certs-20211231102953-6736                   |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
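Each audit row records a complete invocation and can be replayed against its profile. For example, the metrics-server and stop rows above correspond to (flags copied verbatim from the table):

    # Replay of the audit entries above for the default-k8s-different-port profile.
    out/minikube-linux-amd64 addons enable metrics-server \
      -p default-k8s-different-port-20211231103230-6736 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    out/minikube-linux-amd64 stop \
      -p default-k8s-different-port-20211231103230-6736 --alsologtostderr -v=3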
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 10:46:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 10:46:05.967243  259177 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:46:05.967352  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967357  259177 out.go:310] Setting ErrFile to fd 2...
	I1231 10:46:05.967361  259177 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:46:05.967469  259177 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:46:05.967735  259177 out.go:304] Setting JSON to false
	I1231 10:46:05.969104  259177 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5320,"bootTime":1640942245,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:46:05.969197  259177 start.go:122] virtualization: kvm guest
	I1231 10:46:05.973853  259177 out.go:176] * [default-k8s-different-port-20211231103230-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:46:05.974255  259177 notify.go:174] Checking for updates...
	I1231 10:46:05.977868  259177 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:46:05.981223  259177 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:46:05.983862  259177 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:05.986549  259177 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:46:05.989061  259177 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:46:05.990312  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:05.991362  259177 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:46:06.042846  259177 docker.go:132] docker version: linux-20.10.12
	I1231 10:46:06.042944  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.151292  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.079978563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:46:06.151415  259177 docker.go:237] overlay module found
	I1231 10:46:06.155844  259177 out.go:176] * Using the docker driver based on existing profile
	I1231 10:46:06.155884  259177 start.go:280] selected driver: docker
	I1231 10:46:06.155890  259177 start.go:795] validating driver "docker" against &{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.155998  259177 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:46:06.156009  259177 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:46:06.156018  259177 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:46:06.156049  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.156072  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.158635  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.159406  259177 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:46:06.263549  259177 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:46:06.195249559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	W1231 10:46:06.263707  259177 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:46:06.263738  259177 out.go:241] ! Your cgroup does not allow setting memory.
	I1231 10:46:06.266310  259177 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:46:06.266476  259177 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1231 10:46:06.266517  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:06.266529  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:06.266555  259177 start_flags.go:298] config:
	{Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:06.269292  259177 out.go:176] * Starting control plane node default-k8s-different-port-20211231103230-6736 in cluster default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.269341  259177 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 10:46:06.271330  259177 out.go:176] * Pulling base image ...
	I1231 10:46:06.271382  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:06.271427  259177 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 10:46:06.271440  259177 cache.go:57] Caching tarball of preloaded images
	I1231 10:46:06.271488  259177 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 10:46:06.271687  259177 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1231 10:46:06.271703  259177 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on containerd
	I1231 10:46:06.271838  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.311882  259177 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 10:46:06.311924  259177 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 10:46:06.311934  259177 cache.go:206] Successfully downloaded all kic artifacts
	I1231 10:46:06.311964  259177 start.go:313] acquiring machines lock for default-k8s-different-port-20211231103230-6736: {Name:mk03f3b7e941bf8396158e471587c1c8924400ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1231 10:46:06.312088  259177 start.go:317] acquired machines lock for "default-k8s-different-port-20211231103230-6736" in 90µs
	I1231 10:46:06.312120  259177 start.go:93] Skipping create...Using existing machine configuration
	I1231 10:46:06.312127  259177 fix.go:55] fixHost starting: 
	I1231 10:46:06.312392  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.348944  259177 fix.go:108] recreateIfNeeded on default-k8s-different-port-20211231103230-6736: state=Stopped err=<nil>
	W1231 10:46:06.348985  259177 fix.go:134] unexpected machine state, will restart: <nil>
	I1231 10:46:04.629364  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:06.630805  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
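The 253675 lines interleaved here come from the concurrent embed-certs test, which is polling its node's Ready condition on each wait interval. Outside the harness, roughly the same check looks like this (a sketch; the kubectl context name matching the profile is an assumption):

    # Hand-rolled equivalent of the node_ready poll above.
    kubectl --context embed-certs-20211231102953-6736 get node \
      embed-certs-20211231102953-6736 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'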
	I1231 10:46:06.352123  259177 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20211231103230-6736" ...
	I1231 10:46:06.352210  259177 cli_runner.go:133] Run: docker start default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.813658  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:06.855625  259177 kic.go:420] container "default-k8s-different-port-20211231103230-6736" state is running.
	I1231 10:46:06.856072  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.893499  259177 profile.go:147] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/config.json ...
	I1231 10:46:06.893751  259177 machine.go:88] provisioning docker machine ...
	I1231 10:46:06.893775  259177 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:06.893829  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:06.942482  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:06.942732  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:06.942761  259177 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20211231103230-6736 && echo "default-k8s-different-port-20211231103230-6736" | sudo tee /etc/hostname
	I1231 10:46:06.944176  259177 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49220->127.0.0.1:49432: read: connection reset by peer
	I1231 10:46:10.090471  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20211231103230-6736
	
	I1231 10:46:10.090566  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.126309  259177 main.go:130] libmachine: Using SSH client type: native
	I1231 10:46:10.126479  259177 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a0400] 0x7a34e0 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I1231 10:46:10.126500  259177 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20211231103230-6736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20211231103230-6736/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20211231103230-6736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1231 10:46:10.265308  259177 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1231 10:46:10.265342  259177 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube}
	I1231 10:46:10.265381  259177 ubuntu.go:177] setting up certificates
	I1231 10:46:10.265390  259177 provision.go:83] configureAuth start
	I1231 10:46:10.265438  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.304699  259177 provision.go:138] copyHostCerts
	I1231 10:46:10.304774  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem, removing ...
	I1231 10:46:10.304797  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem
	I1231 10:46:10.304931  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.pem (1082 bytes)
	I1231 10:46:10.305111  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem, removing ...
	I1231 10:46:10.305135  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem
	I1231 10:46:10.305188  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cert.pem (1123 bytes)
	I1231 10:46:10.305275  259177 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem, removing ...
	I1231 10:46:10.305288  259177 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem
	I1231 10:46:10.305319  259177 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/key.pem (1675 bytes)
	I1231 10:46:10.305369  259177 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20211231103230-6736 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20211231103230-6736]
	I1231 10:46:10.388915  259177 provision.go:172] copyRemoteCerts
	I1231 10:46:10.388990  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1231 10:46:10.389022  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.428508  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.524630  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1231 10:46:10.546323  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I1231 10:46:10.568067  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1231 10:46:10.589042  259177 provision.go:86] duration metric: configureAuth took 323.641763ms
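configureAuth regenerates the host-side CA/client material and a server cert whose SANs (listed in the provision line above) cover the node IP, localhost, and the profile name. A quick way to confirm the copied server cert carries those SANs, assuming openssl is present inside the kicbase image:

    # Sketch only: openssl inside the node container is an assumption.
    docker exec default-k8s-different-port-20211231103230-6736 \
      openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'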
	I1231 10:46:10.589074  259177 ubuntu.go:193] setting minikube options for container-runtime
	I1231 10:46:10.589309  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:10.589325  259177 machine.go:91] provisioned docker machine in 3.69555799s
	I1231 10:46:10.589336  259177 start.go:267] post-start starting for "default-k8s-different-port-20211231103230-6736" (driver="docker")
	I1231 10:46:10.589345  259177 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1231 10:46:10.589389  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1231 10:46:10.589435  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.626814  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.729130  259177 ssh_runner.go:195] Run: cat /etc/os-release
	I1231 10:46:10.732760  259177 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1231 10:46:10.732791  259177 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1231 10:46:10.732799  259177 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1231 10:46:10.732804  259177 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1231 10:46:10.732813  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/addons for local assets ...
	I1231 10:46:10.732873  259177 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files for local assets ...
	I1231 10:46:10.732955  259177 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem -> 67362.pem in /etc/ssl/certs
	I1231 10:46:10.733065  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1231 10:46:10.741203  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:10.767998  259177 start.go:270] post-start completed in 178.644414ms
	I1231 10:46:10.768098  259177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:46:10.768143  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.814628  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:09.129448  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:11.130044  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:10.913687  259177 fix.go:57] fixHost completed within 4.601551226s
	I1231 10:46:10.913730  259177 start.go:80] releasing machines lock for "default-k8s-different-port-20211231103230-6736", held for 4.601625482s
	I1231 10:46:10.913830  259177 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952465  259177 ssh_runner.go:195] Run: systemctl --version
	I1231 10:46:10.952474  259177 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1231 10:46:10.952512  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.952544  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:10.995326  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:10.996790  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:11.093213  259177 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1231 10:46:11.117553  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1231 10:46:11.129631  259177 docker.go:158] disabling docker service ...
	I1231 10:46:11.129684  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1231 10:46:11.141035  259177 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1231 10:46:11.151527  259177 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1231 10:46:11.242410  259177 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1231 10:46:11.332381  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1231 10:46:11.343726  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1231 10:46:11.360330  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
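The long base64 argument above is the entire containerd config.toml that minikube writes into the node; per this log's settings it contains, among other things, conf_dir = "/etc/cni/net.mk" and SystemdCgroup = false (matching the cgroupfs driver used throughout). Two ways to read it, assuming the profile name from this log:

    # Decode the blob locally (paste the base64 string from the log in place of <BLOB>).
    echo '<BLOB>' | base64 -d | less
    # Or read the decoded result straight from the node container.
    docker exec default-k8s-different-port-20211231103230-6736 \
      sudo cat /etc/containerd/config.toml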
	I1231 10:46:11.377258  259177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1231 10:46:11.386307  259177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1231 10:46:11.395035  259177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1231 10:46:11.484118  259177 ssh_runner.go:195] Run: sudo systemctl restart containerd
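Before restarting containerd, minikube checks bridge-nf-call-iptables and forces ip_forward on; both are kernel prerequisites for pod networking. A sketch for verifying them inside the node container, using the profile name from this log:

    docker exec default-k8s-different-port-20211231103230-6736 \
      sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward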
	I1231 10:46:11.570327  259177 start.go:437] Will wait 60s for socket path /run/containerd/containerd.sock
	I1231 10:46:11.570400  259177 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1231 10:46:11.575057  259177 start.go:458] Will wait 60s for crictl version
	I1231 10:46:11.575130  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:11.606505  259177 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-12-31T10:46:11Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
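The first crictl call fails because containerd was just restarted and its CRI service is still initializing, so the harness schedules a retry (about 11s here). A manual wait loop with the same effect, where the 30 x 1s cap is an assumption:

    # Poll until the CRI server inside the node container answers.
    for i in $(seq 1 30); do
      docker exec default-k8s-different-port-20211231103230-6736 \
        sudo crictl version >/dev/null 2>&1 && break
      sleep 1
    done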
	I1231 10:46:13.130219  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:15.629571  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.654252  259177 ssh_runner.go:195] Run: sudo crictl version
	I1231 10:46:22.679099  259177 start.go:467] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.12
	RuntimeApiVersion:  v1alpha2
	I1231 10:46:22.679173  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.699642  259177 ssh_runner.go:195] Run: containerd --version
	I1231 10:46:22.724111  259177 out.go:176] * Preparing Kubernetes v1.23.1 on containerd 1.4.12 ...
	I1231 10:46:22.724192  259177 cli_runner.go:133] Run: docker network inspect default-k8s-different-port-20211231103230-6736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1231 10:46:22.761739  259177 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1231 10:46:22.767723  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1231 10:46:18.129788  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:20.629221  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.629685  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:22.782449  259177 out.go:176]   - kubelet.global-housekeeping-interval=60m
	I1231 10:46:22.785523  259177 out.go:176]   - kubelet.housekeeping-interval=5m
	I1231 10:46:22.788150  259177 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I1231 10:46:22.788429  259177 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 10:46:22.788579  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.817871  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.817900  259177 containerd.go:526] Images already preloaded, skipping extraction
	I1231 10:46:22.817954  259177 ssh_runner.go:195] Run: sudo crictl images --output json
	I1231 10:46:22.845121  259177 containerd.go:612] all images are preloaded for containerd runtime.
	I1231 10:46:22.845143  259177 cache_images.go:84] Images are preloaded, skipping loading
	I1231 10:46:22.845184  259177 ssh_runner.go:195] Run: sudo crictl info
	I1231 10:46:22.872122  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:22.872151  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:22.872177  259177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1231 10:46:22.872197  259177 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.23.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20211231103230-6736 NodeName:default-k8s-different-port-20211231103230-6736 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1231 10:46:22.872494  259177 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20211231103230-6736"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
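	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what minikube later writes to /var/tmp/minikube/kubeadm.yaml. A config like this can be sanity-checked without touching the cluster via kubeadm's dry-run mode; a minimal sketch, assuming the file has already been copied into place as the log shows below:
	
	  # Hypothetical manual check; binary path and config path are taken from the log.
	  sudo /var/lib/minikube/binaries/v1.23.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --dry-run
	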
	I1231 10:46:22.872594  259177 kubeadm.go:788] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --global-housekeeping-interval=60m --hostname-override=default-k8s-different-port-20211231103230-6736 --housekeeping-interval=5m --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1231 10:46:22.872650  259177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.1
	I1231 10:46:22.880952  259177 binaries.go:44] Found k8s binaries, skipping transfer
	I1231 10:46:22.881023  259177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1231 10:46:22.889603  259177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (653 bytes)
	I1231 10:46:22.904645  259177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1231 10:46:22.919901  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1231 10:46:22.934205  259177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1231 10:46:22.937824  259177 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
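	The one-liner above is an idempotent host-record update: it filters any existing control-plane.minikube.internal entry out of /etc/hosts, appends the fresh mapping, and installs the result with sudo cp (the redirection targets a temp file first, so it does not itself need root). Afterwards /etc/hosts contains the line:
	
	  192.168.67.2	control-plane.minikube.internal
	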
	I1231 10:46:22.948914  259177 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736 for IP: 192.168.67.2
	I1231 10:46:22.949067  259177 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key
	I1231 10:46:22.949116  259177 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key
	I1231 10:46:22.949182  259177 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/client.key
	I1231 10:46:22.949259  259177 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key.c7fa3a9e
	I1231 10:46:22.949302  259177 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key
	I1231 10:46:22.949461  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem (1338 bytes)
	W1231 10:46:22.949511  259177 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736_empty.pem, impossibly tiny 0 bytes
	I1231 10:46:22.949519  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca-key.pem (1679 bytes)
	I1231 10:46:22.949548  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/ca.pem (1082 bytes)
	I1231 10:46:22.949577  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/cert.pem (1123 bytes)
	I1231 10:46:22.949600  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/key.pem (1675 bytes)
	I1231 10:46:22.949638  259177 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem (1708 bytes)
	I1231 10:46:22.950592  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1231 10:46:22.971400  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1231 10:46:22.991752  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1231 10:46:23.013577  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/default-k8s-different-port-20211231103230-6736/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1231 10:46:23.034846  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1231 10:46:23.055398  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1231 10:46:23.077052  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1231 10:46:23.096743  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1231 10:46:23.116982  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1231 10:46:23.138029  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/certs/6736.pem --> /usr/share/ca-certificates/6736.pem (1338 bytes)
	I1231 10:46:23.158888  259177 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/ssl/certs/67362.pem --> /usr/share/ca-certificates/67362.pem (1708 bytes)
	I1231 10:46:23.180681  259177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1231 10:46:23.196969  259177 ssh_runner.go:195] Run: openssl version
	I1231 10:46:23.202414  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6736.pem && ln -fs /usr/share/ca-certificates/6736.pem /etc/ssl/certs/6736.pem"
	I1231 10:46:23.211307  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215577  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 31 09:47 /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.215648  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6736.pem
	I1231 10:46:23.221379  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6736.pem /etc/ssl/certs/51391683.0"
	I1231 10:46:23.230004  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/67362.pem && ln -fs /usr/share/ca-certificates/67362.pem /etc/ssl/certs/67362.pem"
	I1231 10:46:23.240407  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245087  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 31 09:47 /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.245149  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/67362.pem
	I1231 10:46:23.252376  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/67362.pem /etc/ssl/certs/3ec20f2e.0"
	I1231 10:46:23.261273  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1231 10:46:23.272187  259177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276381  259177 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 31 09:42 /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.276439  259177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1231 10:46:23.282440  259177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1231 10:46:23.291931  259177 kubeadm.go:388] StartCluster: {Name:default-k8s-different-port-20211231103230-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:default-k8s-different-port-20211231103230-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.3.1@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 10:46:23.292071  259177 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1231 10:46:23.292143  259177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1231 10:46:23.322595  259177 cri.go:87] found id: "2831ff3abf5d3b7c3004a71a90c5a249bd29a86abeeac5faa59d99cd8ba82d29"
	I1231 10:46:23.322627  259177 cri.go:87] found id: "bd73d75d2e911560016e85d3108966fc37ee8b4b3657740aad59a2ef691ec869"
	I1231 10:46:23.322635  259177 cri.go:87] found id: "6f1fab877ff5d05fc062729318a8cb0acd49484779378aa54df1414cc54b604e"
	I1231 10:46:23.322641  259177 cri.go:87] found id: "631b3be24dd2b7667fa37162bef86df2c43e5cf7b5aac17096ee8f79bf749549"
	I1231 10:46:23.322646  259177 cri.go:87] found id: "a578cbf12a8e43444eec092845ea14c9a2f682965d287a9801a49dd0a7a8f11f"
	I1231 10:46:23.322652  259177 cri.go:87] found id: "78f1ab230e90175601afd355c7e642dc2f7730a2c82d7591eb54c79c0cd8fdcb"
	I1231 10:46:23.322658  259177 cri.go:87] found id: ""
	I1231 10:46:23.322704  259177 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1231 10:46:23.340383  259177 cri.go:114] JSON = null
	W1231 10:46:23.340448  259177 kubeadm.go:395] unpause failed: list paused: list returned 0 containers, but ps returned 6
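	The warning above records a disagreement between the CRI and the OCI runtime: crictl reported six kube-system containers, while runc's state directory for the k8s.io namespace listed none, so the unpause pass is skipped. A minimal sketch to reproduce the comparison by hand, using only the two invocations already shown in the log:
	
	  # Containers as the CRI sees them; should print 6 per the "found id" lines above.
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
	  # Containers as runc sees them; "null" here matches the "JSON = null" line above.
	  sudo runc --root /run/containerd/runc/k8s.io list -f json
	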
	I1231 10:46:23.340504  259177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1231 10:46:23.349236  259177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1231 10:46:23.356926  259177 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.357908  259177 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20211231103230-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:23.358387  259177 kubeconfig.go:127] "default-k8s-different-port-20211231103230-6736" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig - will repair!
	I1231 10:46:23.359199  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:23.361565  259177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1231 10:46:23.369806  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.369877  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.387797  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.588254  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.588346  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.603506  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.788716  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.788802  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:23.805281  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:23.988608  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:23.988691  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.003485  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.188825  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.188911  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.204147  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.388346  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.388445  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.405511  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.587900  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.587979  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.603393  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.788613  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.788703  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:24.804658  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.988878  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:24.988986  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.004053  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.188400  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.188499  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.203668  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.388884  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.388958  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.404633  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.588928  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.588999  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.603675  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:25.787949  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.788030  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:25.803421  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:24.630080  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:27.131198  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:25.988170  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:25.988259  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.003489  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.188799  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.188874  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.206241  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.388554  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.388631  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.404946  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1231 10:46:26.404976  259177 api_server.go:165] Checking apiserver status ...
	I1231 10:46:26.405015  259177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1231 10:46:26.421640  259177 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W1231 10:46:26.421666  259177 kubeadm.go:600] needs reconfigure: apiserver error: timed out waiting for the condition
	I1231 10:46:26.421696  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1231 10:46:27.164679  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:27.176052  259177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1231 10:46:27.184966  259177 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I1231 10:46:27.185023  259177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1231 10:46:27.192809  259177 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1231 10:46:27.192859  259177 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1231 10:46:29.629674  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:31.629826  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:33.630236  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:35.630628  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.573231  259177 out.go:203]   - Generating certificates and keys ...
	I1231 10:46:41.577004  259177 out.go:203]   - Booting up control plane ...
	I1231 10:46:41.580458  259177 out.go:203]   - Configuring RBAC rules ...
	I1231 10:46:41.582339  259177 cni.go:93] Creating CNI manager for ""
	I1231 10:46:41.582359  259177 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 10:46:38.131557  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:40.628931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:42.629291  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:41.584853  259177 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1231 10:46:41.584972  259177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1231 10:46:41.589132  259177 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.1/kubectl ...
	I1231 10:46:41.589160  259177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1231 10:46:41.603970  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1231 10:46:42.303042  259177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1231 10:46:42.303113  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.303143  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736 minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.323470  259177 ops.go:34] apiserver oom_adj: -16
	I1231 10:46:42.425794  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:42.997478  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.497994  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:43.997336  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.497461  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.997469  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:45.497141  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:44.629588  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:46.630233  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:45.996911  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.497554  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:46.996878  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.497472  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:47.997537  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.496954  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:48.997559  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.497163  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.997251  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:50.496922  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:49.129073  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:51.129976  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:50.997663  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.497167  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:51.996864  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.497527  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:52.997090  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.497799  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:53.997130  259177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1231 10:46:54.105111  259177 kubeadm.go:864] duration metric: took 11.802054034s to wait for elevateKubeSystemPrivileges.
	I1231 10:46:54.105148  259177 kubeadm.go:390] StartCluster complete in 30.81326231s
	I1231 10:46:54.105171  259177 settings.go:142] acquiring lock: {Name:mk9a67300ef15539fef730cd5754a9b7a8d3b037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.105289  259177 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:46:54.108579  259177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig: {Name:mk2fc3533e33e8cae2448f53a339a72544f7031f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1231 10:46:54.630591  259177 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20211231103230-6736" rescaled to 1
	I1231 10:46:54.630781  259177 start.go:206] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
	I1231 10:46:54.634169  259177 out.go:176] * Verifying Kubernetes components...
	I1231 10:46:54.631217  259177 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I1231 10:46:54.634291  259177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:46:54.634321  259177 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634357  259177 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634365  259177 addons.go:165] addon storage-provisioner should already be in state true
	I1231 10:46:54.634362  259177 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.631547  259177 config.go:176] Loaded profile config "default-k8s-different-port-20211231103230-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:46:54.631838  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1231 10:46:54.634371  259177 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634395  259177 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634410  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634414  259177 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634427  259177 addons.go:165] addon metrics-server should already be in state true
	I1231 10:46:54.634489  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634516  259177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:54.634487  259177 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.634558  259177 addons.go:165] addon dashboard should already be in state true
	I1231 10:46:54.634593  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.634970  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635065  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635066  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.635106  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.685830  259177 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:46:54.699606  259177 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20211231103230-6736"
	W1231 10:46:54.699638  259177 addons.go:165] addon default-storageclass should already be in state true
	I1231 10:46:54.699668  259177 host.go:66] Checking if "default-k8s-different-port-20211231103230-6736" exists ...
	I1231 10:46:54.700169  259177 cli_runner.go:133] Run: docker container inspect default-k8s-different-port-20211231103230-6736 --format={{.State.Status}}
	I1231 10:46:54.704286  259177 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1231 10:46:54.704439  259177 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:54.707989  259177 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.704464  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1231 10:46:54.708151  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.708200  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1231 10:46:54.708461  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I1231 10:46:54.708547  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.719755  259177 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
	I1231 10:46:54.722183  259177 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I1231 10:46:54.722299  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1231 10:46:54.722320  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1231 10:46:54.722396  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.765928  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.766391  259177 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:54.766411  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1231 10:46:54.766457  259177 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20211231103230-6736
	I1231 10:46:54.767368  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.784353  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:54.795209  259177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
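	The pipeline above edits the CoreDNS Corefile in flight: sed splices a hosts block immediately before the forward directive, mapping host.minikube.internal to the host gateway, and kubectl replace pushes the modified ConfigMap back. Under that assumption, the rewritten Corefile fragment would read roughly:
	
	          hosts {
	             192.168.67.1 host.minikube.internal
	             fallthrough
	          }
	          forward . /etc/resolv.conf
	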
	I1231 10:46:54.825887  259177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/default-k8s-different-port-20211231103230-6736/id_rsa Username:docker}
	I1231 10:46:55.079913  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1231 10:46:55.079940  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I1231 10:46:55.081041  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1231 10:46:55.101407  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1231 10:46:55.101636  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1231 10:46:55.101669  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1231 10:46:55.182759  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1231 10:46:55.182791  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I1231 10:46:55.192507  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1231 10:46:55.192539  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1231 10:46:55.210370  259177 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.210394  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I1231 10:46:55.280215  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1231 10:46:55.280281  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1231 10:46:55.292923  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1231 10:46:55.303427  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1231 10:46:55.303453  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I1231 10:46:55.390564  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1231 10:46:55.390603  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1231 10:46:55.488578  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1231 10:46:55.488607  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1231 10:46:55.493706  259177 start.go:773] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I1231 10:46:55.509144  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1231 10:46:55.509179  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1231 10:46:55.591504  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1231 10:46:55.591541  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1231 10:46:55.611957  259177 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:55.611987  259177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1231 10:46:55.692647  259177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1231 10:46:56.280527  259177 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20211231103230-6736"
	I1231 10:46:56.694434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:46:57.215361  259177 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.522655734s)
	I1231 10:46:53.130645  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:55.628936  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:46:57.218195  259177 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1231 10:46:57.218237  259177 addons.go:417] enableAddons completed in 2.587057511s
	I1231 10:46:59.194381  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:00.129798  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:02.629613  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:01.195147  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:03.195279  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.694625  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:05.129931  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:07.629077  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:08.193944  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:10.195307  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:09.629227  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.129630  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:12.695675  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:15.195343  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:14.629484  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.129904  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:17.693570  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.693815  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:19.629766  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.630502  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:21.694204  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:23.694541  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:25.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:24.130302  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:26.629329  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:28.193820  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:30.694197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:28.630092  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:30.630190  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:32.694753  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:35.194657  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:33.129099  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:35.129944  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.629071  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:37.694883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:40.194845  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:39.629460  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:41.629639  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:42.694778  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:45.194603  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:44.129618  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:46.628987  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:47.693566  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:49.694322  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:48.629810  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:51.129720  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:52.194896  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:54.694088  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:53.629970  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:55.630008  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:47:56.694160  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.694485  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:00.695020  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:47:58.129386  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:00.629184  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:02.630921  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:03.194853  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.195360  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:05.129367  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.629362  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:07.197249  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.693805  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:09.629727  253675 node_ready.go:58] node "embed-certs-20211231102953-6736" has status "Ready":"False"
	I1231 10:48:11.132106  253675 node_ready.go:38] duration metric: took 4m0.023004433s waiting for node "embed-certs-20211231102953-6736" to be "Ready" ...
	I1231 10:48:11.135160  253675 out.go:176] 
	W1231 10:48:11.135314  253675 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:48:11.135327  253675 out.go:241] * 
	W1231 10:48:11.136204  253675 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
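	At this point the embed-certs cluster has given up: the node object exists but never reports Ready within the 6m0s wait, so the run exits with GUEST_START. When triaging a hang like this by hand, the usual next steps are to inspect the node conditions and collect the logs the box above asks for; a hedged sketch, not part of the test run:
	
	  kubectl --context embed-certs-20211231102953-6736 describe node embed-certs-20211231102953-6736
	  minikube -p embed-certs-20211231102953-6736 logs --file=logs.txt
	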
	I1231 10:48:11.694445  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:13.694502  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:15.695650  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:18.193879  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:20.194982  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:22.694025  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:24.695099  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:26.695542  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:29.193781  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:31.196683  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:33.693756  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:35.694396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:37.694612  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:40.194187  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:42.194624  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:44.694809  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:47.194396  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:49.194998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:51.693920  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:54.194059  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:56.194946  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:48:58.694370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:01.194666  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:03.693794  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:05.693979  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:07.694832  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:10.193727  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:12.196818  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:14.694785  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:17.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:19.195220  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:21.693956  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:24.193839  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:26.194434  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:28.694344  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:31.194654  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:33.694374  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:36.194938  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:38.693771  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:40.694132  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:42.694270  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:44.694407  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:47.194463  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:49.194582  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:51.694082  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:53.694472  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:56.194226  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:49:58.194871  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:00.694067  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:02.694491  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:04.695370  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:07.193813  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:09.194857  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:11.693897  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:13.694405  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:15.695280  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:18.194197  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:20.694632  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:23.193998  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:25.194494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:27.194919  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:29.693725  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:31.694052  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:34.194397  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:36.695239  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:39.194905  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:41.693646  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:43.694494  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:46.193735  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:48.194245  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:50.694846  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:53.194883  259177 node_ready.go:58] node "default-k8s-different-port-20211231103230-6736" has status "Ready":"False"
	I1231 10:50:54.696282  259177 node_ready.go:38] duration metric: took 4m0.010315548s waiting for node "default-k8s-different-port-20211231103230-6736" to be "Ready" ...
	I1231 10:50:54.698863  259177 out.go:176] 
	W1231 10:50:54.699054  259177 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1231 10:50:54.699075  259177 out.go:241] * 
	W1231 10:50:54.699945  259177 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	857796907c094       6de166512aa22       5 minutes ago       Exited              kindnet-cni               8                   5b9b47832c3c9
	6de5e38677b20       b46c42588d511       22 minutes ago      Running             kube-proxy                0                   e57926d0ab3c5
	48225c99a0965       25f8c7f3da61c       22 minutes ago      Running             etcd                      1                   0eff770ba2d39
	be441fc987f41       b6d7abedde399       22 minutes ago      Running             kube-apiserver            1                   9be9eca9d95fc
	d693c50da8741       f51846a4fd288       22 minutes ago      Running             kube-controller-manager   1                   9c10142d32214
	d05e688d9162e       71d575efe6283       22 minutes ago      Running             kube-scheduler            1                   b0d0ced0300d8
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 11:09:01 UTC. --
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.320000250Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.593910341Z" level=info msg="RemoveContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\""
	Dec 31 10:53:32 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:53:32.599907499Z" level=info msg="RemoveContainer for \"b452495aa7e452cf74da07fd57f9f73f18d096657904345825605e9beb3f73ff\" returns successfully"
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.714323981Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.749622770Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.750390208Z" level=info msg="StartContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:44 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:44.903686780Z" level=info msg="StartContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\" returns successfully"
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.290062048Z" level=info msg="Finish piping stderr of container \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.290078473Z" level=info msg="Finish piping stdout of container \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.291015411Z" level=info msg="TaskExit event &TaskExit{ContainerID:9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3,ID:9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3,Pid:2958,ExitStatus:2,ExitedAt:2021-12-31 10:58:55.290734411 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.322065104Z" level=info msg="shim disconnected" id=9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3
	Dec 31 10:58:55 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:55.322190962Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:56.193168675Z" level=info msg="RemoveContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\""
	Dec 31 10:58:56 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T10:58:56.199377975Z" level=info msg="RemoveContainer for \"34717ae14d3de8130a37049a5b9b465105ebf94cad4065da69609f00d6b8b5fc\" returns successfully"
	Dec 31 11:03:59 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:03:59.712985708Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:8,}"
	Dec 31 11:03:59 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:03:59.739373328Z" level=info msg="CreateContainer within sandbox \"5b9b47832c3c9608978186904d83608afaba148d142bf3d9b486c8d8966dbbdf\" for &ContainerMetadata{Name:kindnet-cni,Attempt:8,} returns container id \"857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e\""
	Dec 31 11:03:59 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:03:59.740019896Z" level=info msg="StartContainer for \"857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e\""
	Dec 31 11:03:59 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:03:59.987702447Z" level=info msg="StartContainer for \"857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e\" returns successfully"
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.203308731Z" level=info msg="Finish piping stdout of container \"857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e\""
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.203373928Z" level=info msg="Finish piping stderr of container \"857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e\""
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.204709148Z" level=info msg="TaskExit event &TaskExit{ContainerID:857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e,ID:857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e,Pid:3306,ExitStatus:2,ExitedAt:2021-12-31 11:04:10.204224042 +0000 UTC,XXX_unrecognized:[],}"
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.229838445Z" level=info msg="shim disconnected" id=857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.229937205Z" level=error msg="copy shim log" error="read /proc/self/fd/88: file already closed"
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.775736813Z" level=info msg="RemoveContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\""
	Dec 31 11:04:10 default-k8s-different-port-20211231103230-6736 containerd[342]: time="2021-12-31T11:04:10.781655423Z" level=info msg="RemoveContainer for \"9329ce01dc0e578bb9e380f8c37e436d9351e3c9e7b681971e4b34e68d24f8c3\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20211231103230-6736
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20211231103230-6736
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a81813405fc4902e1caabd8d69223ec230d201fb
	                    minikube.k8s.io/name=default-k8s-different-port-20211231103230-6736
	                    minikube.k8s.io/updated_at=2021_12_31T10_46_42_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Dec 2021 10:46:38 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20211231103230-6736
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Dec 2021 11:08:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Dec 2021 11:07:11 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Dec 2021 11:07:11 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Dec 2021 11:07:11 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 31 Dec 2021 11:07:11 +0000   Fri, 31 Dec 2021 10:46:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20211231103230-6736
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32879780Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                60ec9bed-9ff2-4db1-b438-2738c19f5f1f
	  Boot ID:                    e6f4bca3-a61e-4e63-a68d-0c73a5c2c999
	  Kernel Version:             5.11.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.12
	  Kubelet Version:            v1.23.1
	  Kube-Proxy Version:         v1.23.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20211231103230-6736                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-5x2g8                                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-default-k8s-different-port-20211231103230-6736             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20211231103230-6736    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-8f86l                                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-different-port-20211231103230-6736             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 22m                kube-proxy  
	  Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x5 over 22m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x5 over 22m)  kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet     Node default-k8s-different-port-20211231103230-6736 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 da bc c6 d3 d8 08 06
	[  +0.905080] IPv4: martian source 10.244.0.4 from 10.244.0.4, on dev vetha337dd59
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 d5 c5 d8 9b b7 08 06
	[  +0.767220] IPv4: martian source 10.244.0.5 from 10.244.0.5, on dev vethad0df3e8
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 45 74 46 b2 ed 08 06
	[  +0.175235] IPv4: martian source 10.244.0.6 from 10.244.0.6, on dev vethfa30cc3c
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 f5 8d 27 39 52 08 06
	[  +2.267937] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.022406] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.024047] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.951751] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.011878] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023951] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +2.963808] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.003832] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	[  +1.023918] IPv4: martian source 10.244.0.134 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 d8 77 e7 5a 7d 08 06
	
	* 
	* ==> etcd [48225c99a09655e9f407ab2f3c22787aa4836d3b837422f8080fb6cb20c5e755] <==
	* {"level":"info","ts":"2021-12-31T10:46:34.981Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20211231103230-6736 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.700Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-31T10:46:35.701Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-31T10:46:35.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2021-12-31T10:56:35.719Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":705}
	{"level":"info","ts":"2021-12-31T10:56:35.720Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":705,"took":"814.329µs"}
	{"level":"info","ts":"2021-12-31T11:01:35.726Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2021-12-31T11:01:35.727Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":796,"took":"390.901µs"}
	{"level":"info","ts":"2021-12-31T11:06:35.733Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":893}
	{"level":"info","ts":"2021-12-31T11:06:35.733Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":893,"took":"362.797µs"}
	
	* 
	* ==> kernel <==
	*  11:09:02 up  1:51,  0 users,  load average: 0.52, 0.54, 0.79
	Linux default-k8s-different-port-20211231103230-6736 5.11.0-1023-gcp #25~20.04.1-Ubuntu SMP Mon Nov 15 15:54:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [be441fc987f419ba5ae606c3dc7e79411ef7643081ffec1e0932415a1faec812] <==
	* I1231 10:57:38.994437       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 10:59:38.994992       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 10:59:38.995135       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 10:59:38.995145       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:01:39.000879       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:01:39.000969       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:01:39.000977       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:02:39.001755       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:02:39.001841       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:02:39.001849       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:04:39.002322       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:04:39.002412       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:04:39.002421       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:06:39.007483       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:06:39.007565       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:06:39.007572       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1231 11:07:39.007812       1 handler_proxy.go:104] no RequestInfo found in the context
	E1231 11:07:39.007898       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1231 11:07:39.007910       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d693c50da8741c5da85b83ff32c1c5fd9e21b99ab0afa5b593e83551ee247dd9] <==
	* W1231 11:02:54.141534       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:03:23.736974       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:03:24.156359       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:03:53.751039       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:03:54.175416       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:04:23.760693       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:04:24.192938       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:04:53.775280       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:04:54.209453       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:05:23.798444       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:05:24.228589       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:05:53.819926       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:05:54.245574       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:06:23.828847       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:06:24.262279       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:06:53.848157       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:06:54.278672       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:07:23.870945       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:07:24.295954       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:07:53.889349       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:07:54.316185       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:08:23.905273       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:08:24.336689       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1231 11:08:53.921325       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1231 11:08:54.353633       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [6de5e38677b20bde5bb55f835d713191bc7054d75f34642dd9f38b9df161d628] <==
	* I1231 10:46:54.503302       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I1231 10:46:54.503392       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I1231 10:46:54.503442       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1231 10:46:54.603822       1 server_others.go:206] "Using iptables Proxier"
	I1231 10:46:54.603869       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1231 10:46:54.603877       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1231 10:46:54.603895       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1231 10:46:54.604363       1 server.go:656] "Version info" version="v1.23.1"
	I1231 10:46:54.604971       1 config.go:226] "Starting endpoint slice config controller"
	I1231 10:46:54.604993       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1231 10:46:54.605095       1 config.go:317] "Starting service config controller"
	I1231 10:46:54.605109       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1231 10:46:54.705619       1 shared_informer.go:247] Caches are synced for service config 
	I1231 10:46:54.705653       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [d05e688d9162e6301a11f7be3af706989ab36e552895ab4ab8adeecc620fc5d7] <==
	* W1231 10:46:38.096166       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1231 10:46:38.096271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1231 10:46:38.096104       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1231 10:46:38.096287       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1231 10:46:38.095998       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1231 10:46:38.096303       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1231 10:46:38.950364       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1231 10:46:38.950693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1231 10:46:38.964806       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:38.964848       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.082347       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.082428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.127797       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1231 10:46:39.127849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1231 10:46:39.179308       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1231 10:46:39.179367       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1231 10:46:39.201670       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1231 10:46:39.201705       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1231 10:46:39.233921       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1231 10:46:39.233975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1231 10:46:39.285801       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1231 10:46:39.285843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1231 10:46:39.306180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1231 10:46:39.306224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1231 10:46:41.082427       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-12-31 10:46:07 UTC, end at Fri 2021-12-31 11:09:02 UTC. --
	Dec 31 11:07:52 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:07:52.711459    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:07:57 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:07:57.203550    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:02 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:02.204673    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:03 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 11:08:03.710519    1372 scope.go:110] "RemoveContainer" containerID="857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e"
	Dec 31 11:08:03 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:03.710854    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:08:07 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:07.206084    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:12 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:12.207480    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:14 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 11:08:14.711627    1372 scope.go:110] "RemoveContainer" containerID="857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e"
	Dec 31 11:08:14 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:14.712072    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:08:17 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:17.208453    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:22 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:22.209495    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:26 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 11:08:26.711145    1372 scope.go:110] "RemoveContainer" containerID="857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e"
	Dec 31 11:08:26 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:26.711460    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:08:27 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:27.210408    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:32 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:32.211274    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:37 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:37.212828    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 11:08:41.711342    1372 scope.go:110] "RemoveContainer" containerID="857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e"
	Dec 31 11:08:41 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:41.711674    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:08:42 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:42.213994    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:47 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:47.215494    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:52 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:52.217102    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:08:53 default-k8s-different-port-20211231103230-6736 kubelet[1372]: I1231 11:08:53.711006    1372 scope.go:110] "RemoveContainer" containerID="857796907c094a6557f65b90a6cea17809bcabeaaa11d4b8200f0d8654252f3e"
	Dec 31 11:08:53 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:53.711296    1372 pod_workers.go:918] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-5x2g8_kube-system(48d54ddb-3cf9-429c-9cc4-7ee13b833262)\"" pod="kube-system/kindnet-5x2g8" podUID=48d54ddb-3cf9-429c-9cc4-7ee13b833262
	Dec 31 11:08:57 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:08:57.218470    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Dec 31 11:09:02 default-k8s-different-port-20211231103230-6736 kubelet[1372]: E1231 11:09:02.220167    1372 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
helpers_test.go:262: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:273: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1 (90.806033ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-xx94d" not found
	Error from server (NotFound): pods "metrics-server-7f49dcbd7-gj6bf" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-b7pvv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-ccd587f44-gqg75" not found

** /stderr **
helpers_test.go:278: kubectl --context default-k8s-different-port-20211231103230-6736 describe pod coredns-64897985d-xx94d metrics-server-7f49dcbd7-gj6bf storage-provisioner dashboard-metrics-scraper-56974995fc-b7pvv kubernetes-dashboard-ccd587f44-gqg75: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (542.80s)
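
The kubelet excerpt above shows why the post-mortem found the addon pods missing: the node is stuck at "cni plugin not initialized" because the kindnet-cni container is in CrashLoopBackOff, so nothing the test waits for can come up with networking. For a cluster that is still running, a hedged triage sketch (profile, pod, and container names copied from the log above; none of these commands were executed by this job):

	kubectl --context default-k8s-different-port-20211231103230-6736 -n kube-system get pods -o wide
	kubectl --context default-k8s-different-port-20211231103230-6736 -n kube-system logs -p kindnet-5x2g8 -c kindnet-cni
	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20211231103230-6736 -- sudo ls /etc/cni/net.d

`logs -p` prints the previous (crashed) container's output, which is usually more informative than the back-off message kubelet keeps repeating.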


Test pass (222/266)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 6.21
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.23.1/json-events 5.64
11 TestDownloadOnly/v1.23.1/preload-exists 0
15 TestDownloadOnly/v1.23.1/LogsDuration 0.08
17 TestDownloadOnly/v1.23.2-rc.0/json-events 10.83
18 TestDownloadOnly/v1.23.2-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.2-rc.0/LogsDuration 0.09
23 TestDownloadOnly/DeleteAll 0.39
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
25 TestDownloadOnlyKic 11.77
26 TestOffline 101.5
28 TestAddons/Setup 144.92
30 TestAddons/parallel/Registry 12.74
31 TestAddons/parallel/Ingress 45.72
32 TestAddons/parallel/MetricsServer 5.74
33 TestAddons/parallel/HelmTiller 8.13
35 TestAddons/parallel/CSI 48.87
37 TestAddons/serial/GCPAuth 40.64
38 TestAddons/StoppedEnableDisable 20.58
39 TestCertOptions 61.6
40 TestCertExpiration 259.27
42 TestForceSystemdFlag 265.79
43 TestForceSystemdEnv 75.5
44 TestKVMDriverInstallOrUpdate 1.95
48 TestErrorSpam/setup 42.96
49 TestErrorSpam/start 1.04
50 TestErrorSpam/status 1.31
51 TestErrorSpam/pause 1.93
52 TestErrorSpam/unpause 1.75
53 TestErrorSpam/stop 14.92
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 60.34
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 15.89
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.23
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.58
65 TestFunctional/serial/CacheCmd/cache/add_local 1.14
66 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
67 TestFunctional/serial/CacheCmd/cache/list 0.07
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.55
70 TestFunctional/serial/CacheCmd/cache/delete 0.16
71 TestFunctional/serial/MinikubeKubectlCmd 0.14
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
73 TestFunctional/serial/ExtraConfig 52.86
74 TestFunctional/serial/ComponentHealth 0.06
75 TestFunctional/serial/LogsCmd 1.08
76 TestFunctional/serial/LogsFileCmd 1.07
78 TestFunctional/parallel/ConfigCmd 0.56
79 TestFunctional/parallel/DashboardCmd 3.19
80 TestFunctional/parallel/DryRun 0.86
81 TestFunctional/parallel/InternationalLanguage 0.29
82 TestFunctional/parallel/StatusCmd 1.54
85 TestFunctional/parallel/ServiceCmd 11.76
86 TestFunctional/parallel/AddonsCmd 0.25
87 TestFunctional/parallel/PersistentVolumeClaim 32.43
89 TestFunctional/parallel/SSHCmd 0.94
90 TestFunctional/parallel/CpCmd 1.91
91 TestFunctional/parallel/MySQL 25.11
92 TestFunctional/parallel/FileSync 0.51
93 TestFunctional/parallel/CertSync 3.1
97 TestFunctional/parallel/NodeLabels 0.08
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.99
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.68
102 TestFunctional/parallel/Version/short 0.11
103 TestFunctional/parallel/Version/components 3.04
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.27
108 TestFunctional/parallel/ProfileCmd/profile_list 0.61
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
110 TestFunctional/parallel/MountCmd/any-port 6.2
111 TestFunctional/parallel/MountCmd/specific-port 2.53
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.11
123 TestFunctional/parallel/ImageCommands/Setup 1.18
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 7.25
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.43
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.3
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 8.11
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.06
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.35
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
134 TestFunctional/delete_addon-resizer_images 0.12
135 TestFunctional/delete_my-image_image 0.04
136 TestFunctional/delete_minikube_cached_images 0.03
139 TestIngressAddonLegacy/StartLegacyK8sCluster 93.02
141 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.42
142 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.45
143 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.01
146 TestJSONOutput/start/Command 59.57
147 TestJSONOutput/start/Audit 0
149 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
150 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
152 TestJSONOutput/pause/Command 0.81
153 TestJSONOutput/pause/Audit 0
155 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
156 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
158 TestJSONOutput/unpause/Command 0.73
159 TestJSONOutput/unpause/Audit 0
161 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/stop/Command 23.98
165 TestJSONOutput/stop/Audit 0
167 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
169 TestErrorJSONOutput 0.33
171 TestKicCustomNetwork/create_custom_network 38.5
172 TestKicCustomNetwork/use_default_bridge_network 30.73
173 TestKicExistingNetwork 32.25
174 TestMainNoArgs 0.06
177 TestMountStart/serial/StartWithMountFirst 4.72
178 TestMountStart/serial/VerifyMountFirst 0.37
179 TestMountStart/serial/StartWithMountSecond 4.54
180 TestMountStart/serial/VerifyMountSecond 0.37
181 TestMountStart/serial/DeleteFirst 5.97
182 TestMountStart/serial/VerifyMountPostDelete 0.36
183 TestMountStart/serial/Stop 1.29
184 TestMountStart/serial/RestartStopped 5.98
185 TestMountStart/serial/VerifyMountPostStop 0.37
188 TestMultiNode/serial/FreshStart2Nodes 108.16
189 TestMultiNode/serial/DeployApp2Nodes 3.54
190 TestMultiNode/serial/PingHostFrom2Pods 0.92
191 TestMultiNode/serial/AddNode 44.68
192 TestMultiNode/serial/ProfileList 0.45
193 TestMultiNode/serial/CopyFile 13.41
194 TestMultiNode/serial/StopNode 21.64
195 TestMultiNode/serial/StartAfterStop 36.8
196 TestMultiNode/serial/RestartKeepsNodes 193.46
197 TestMultiNode/serial/DeleteNode 24.78
198 TestMultiNode/serial/StopMultiNode 40.67
199 TestMultiNode/serial/RestartMultiNode 98.35
200 TestMultiNode/serial/ValidateNameConflict 47.75
205 TestPreload 158.65
207 TestScheduledStopUnix 121.12
210 TestInsufficientStorage 19.85
211 TestRunningBinaryUpgrade 158.95
213 TestKubernetesUpgrade 215.59
214 TestMissingContainerUpgrade 147.99
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
217 TestStoppedBinaryUpgrade/Setup 0.5
218 TestNoKubernetes/serial/StartWithK8s 70.4
219 TestStoppedBinaryUpgrade/Upgrade 154.87
220 TestNoKubernetes/serial/StartWithStopK8s 7.12
228 TestNetworkPlugins/group/false 1.21
229 TestNoKubernetes/serial/Start 4.58
233 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
234 TestNoKubernetes/serial/ProfileList 1.8
235 TestNoKubernetes/serial/Stop 5.89
236 TestNoKubernetes/serial/StartNoArgs 6.54
237 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
238 TestStoppedBinaryUpgrade/MinikubeLogs 1
247 TestPause/serial/Start 67.76
248 TestNetworkPlugins/group/auto/Start 82.85
249 TestNetworkPlugins/group/custom-weave/Start 83.02
250 TestPause/serial/SecondStartNoReconfiguration 16.49
251 TestPause/serial/Pause 1.03
252 TestPause/serial/VerifyStatus 0.56
253 TestPause/serial/Unpause 1.03
254 TestPause/serial/PauseAgain 5.6
255 TestPause/serial/DeletePaused 4.22
256 TestPause/serial/VerifyDeletedResources 0.97
257 TestNetworkPlugins/group/cilium/Start 111.24
258 TestNetworkPlugins/group/auto/KubeletFlags 0.53
259 TestNetworkPlugins/group/auto/NetCatPod 11.5
260 TestNetworkPlugins/group/auto/DNS 0.19
261 TestNetworkPlugins/group/auto/Localhost 0.14
262 TestNetworkPlugins/group/auto/HairPin 0.17
263 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.76
264 TestNetworkPlugins/group/custom-weave/NetCatPod 10.58
266 TestNetworkPlugins/group/enable-default-cni/Start 331.4
268 TestNetworkPlugins/group/cilium/ControllerPod 5.02
269 TestNetworkPlugins/group/cilium/KubeletFlags 0.44
270 TestNetworkPlugins/group/cilium/NetCatPod 10.14
271 TestNetworkPlugins/group/cilium/DNS 0.2
272 TestNetworkPlugins/group/cilium/Localhost 0.15
273 TestNetworkPlugins/group/cilium/HairPin 0.15
274 TestNetworkPlugins/group/bridge/Start 69.85
275 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
276 TestNetworkPlugins/group/bridge/NetCatPod 10.3
280 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.56
281 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.12
284 TestStartStop/group/no-preload/serial/FirstStart 75.17
287 TestStartStop/group/no-preload/serial/DeployApp 8.39
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
289 TestStartStop/group/no-preload/serial/Stop 20.53
291 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
292 TestStartStop/group/no-preload/serial/SecondStart 58.31
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
294 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.28
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
296 TestStartStop/group/no-preload/serial/Pause 3.81
300 TestStartStop/group/newest-cni/serial/FirstStart 66.65
301 TestStartStop/group/newest-cni/serial/DeployApp 0
302 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
303 TestStartStop/group/newest-cni/serial/Stop 20.35
304 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
305 TestStartStop/group/newest-cni/serial/SecondStart 53.33
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
309 TestStartStop/group/newest-cni/serial/Pause 3.83
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
313 TestStartStop/group/old-k8s-version/serial/Stop 20.4
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.74
317 TestStartStop/group/embed-certs/serial/Stop 20.29
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
321 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.74
322 TestStartStop/group/default-k8s-different-port/serial/Stop 20.37
323 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.23
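
Any entry in this table can be rerun in isolation from a minikube source checkout. A minimal local sketch, assuming a freshly built out/minikube-linux-amd64 and recalling the flag name from the minikube contributor docs (treat `-minikube-start-args` as an assumption, not something this report confirms):

	go test ./test/integration -timeout=30m -run 'TestDownloadOnly/v1.16.0/json-events' \
	  -args -minikube-start-args="--driver=docker --container-runtime=containerd"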
TestDownloadOnly/v1.16.0/json-events (6.21s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.214489933s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.21s)
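
The json-events subtests exercise `start -o=json`, which emits a machine-readable event stream, one JSON object per line. A minimal consumer sketch (the `demo` profile name is made up, and the `io.k8s.sigs.minikube.step` event type is recalled from this era of minikube rather than taken from this log):

	out/minikube-linux-amd64 start -o=json --download-only -p demo \
	    --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'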

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211231094141-6736
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211231094141-6736: exit status 85 (88.481032ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 09:41:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 09:41:41.134706    6748 out.go:297] Setting OutFile to fd 1 ...
	I1231 09:41:41.134837    6748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:41.134850    6748 out.go:310] Setting ErrFile to fd 2...
	I1231 09:41:41.134859    6748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:41.134984    6748 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	W1231 09:41:41.135122    6748 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: no such file or directory
	I1231 09:41:41.135381    6748 out.go:304] Setting JSON to true
	I1231 09:41:41.136317    6748 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1456,"bootTime":1640942245,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 09:41:41.136411    6748 start.go:122] virtualization: kvm guest
	W1231 09:41:41.141279    6748 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball: no such file or directory
	I1231 09:41:41.141361    6748 notify.go:174] Checking for updates...
	I1231 09:41:41.144545    6748 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 09:41:41.193354    6748 docker.go:132] docker version: linux-20.10.12
	I1231 09:41:41.193499    6748 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:41.691855    6748 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:41.224120934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:41.691959    6748 docker.go:237] overlay module found
	I1231 09:41:41.694825    6748 start.go:280] selected driver: docker
	I1231 09:41:41.694857    6748 start.go:795] validating driver "docker" against <nil>
	I1231 09:41:41.694879    6748 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 09:41:41.694887    6748 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	I1231 09:41:41.695076    6748 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:41.792160    6748 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:41.722925503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:41.792325    6748 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1231 09:41:41.792786    6748 start_flags.go:365] Using suggested 8000MB memory alloc based on sys=32109MB, container=32109MB
	I1231 09:41:41.792886    6748 start_flags.go:792] Wait components to verify : map[apiserver:true system_pods:true]
	I1231 09:41:41.792904    6748 cni.go:93] Creating CNI manager for ""
	I1231 09:41:41.792909    6748 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 09:41:41.792921    6748 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 09:41:41.792929    6748 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1231 09:41:41.792933    6748 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1231 09:41:41.792940    6748 start_flags.go:298] config:
	{Name:download-only-20211231094141-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20211231094141-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 09:41:41.795600    6748 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 09:41:41.797660    6748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 09:41:41.797761    6748 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 09:41:41.827518    6748 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 09:41:41.827556    6748 cache.go:57] Caching tarball of preloaded images
	I1231 09:41:41.827880    6748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1231 09:41:41.830619    6748 preload.go:238] getting checksum for preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:41:41.838354    6748 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 09:41:41.838390    6748 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 09:41:41.872546    6748 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:b655e1c5b01c5f3697280fe6afcc7920 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1231 09:41:45.651240    6748 preload.go:248] saving checksum for preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:41:45.651327    6748 preload.go:255] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211231094141-6736"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
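
The `exit status 85` here is expected rather than a flake: a download-only profile never creates a node, so `minikube logs` has nothing to read ("The control plane node \"\" does not exist." in the output above), and the subtest appears to only bound how long that failing path takes. The preload download itself is integrity-checked against the md5 in the `?checksum=` query parameter; a hedged manual spot-check of the cached tarball (digest copied from the log above, and assuming MINIKUBE_HOME is set as in this run's environment):

	md5sum "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-containerd-overlay2-amd64.tar.lz4"
	# expect: b655e1c5b01c5f3697280fe6afcc7920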

TestDownloadOnly/v1.23.1/json-events (5.64s)

=== RUN   TestDownloadOnly/v1.23.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.23.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.23.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.64416934s)
--- PASS: TestDownloadOnly/v1.23.1/json-events (5.64s)

TestDownloadOnly/v1.23.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.1/preload-exists
--- PASS: TestDownloadOnly/v1.23.1/preload-exists (0.00s)

TestDownloadOnly/v1.23.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.23.1/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211231094141-6736
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211231094141-6736: exit status 85 (76.86074ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 09:41:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 09:41:47.432995    6896 out.go:297] Setting OutFile to fd 1 ...
	I1231 09:41:47.433078    6896 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:47.433082    6896 out.go:310] Setting ErrFile to fd 2...
	I1231 09:41:47.433086    6896 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:47.433187    6896 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	W1231 09:41:47.433299    6896 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: no such file or directory
	I1231 09:41:47.433406    6896 out.go:304] Setting JSON to true
	I1231 09:41:47.434202    6896 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1462,"bootTime":1640942245,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 09:41:47.434278    6896 start.go:122] virtualization: kvm guest
	I1231 09:41:47.437341    6896 notify.go:174] Checking for updates...
	I1231 09:41:47.440223    6896 config.go:176] Loaded profile config "download-only-20211231094141-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1231 09:41:47.440315    6896 start.go:703] api.Load failed for download-only-20211231094141-6736: filestore "download-only-20211231094141-6736": Docker machine "download-only-20211231094141-6736" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1231 09:41:47.440364    6896 driver.go:344] Setting default libvirt URI to qemu:///system
	W1231 09:41:47.440394    6896 start.go:703] api.Load failed for download-only-20211231094141-6736: filestore "download-only-20211231094141-6736": Docker machine "download-only-20211231094141-6736" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1231 09:41:47.481603    6896 docker.go:132] docker version: linux-20.10.12
	I1231 09:41:47.481713    6896 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:47.571680    6896 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:47.510973132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:47.571798    6896 docker.go:237] overlay module found
	I1231 09:41:47.574508    6896 start.go:280] selected driver: docker
	I1231 09:41:47.574531    6896 start.go:795] validating driver "docker" against &{Name:download-only-20211231094141-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20211231094141-6736 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker}
	I1231 09:41:47.574626    6896 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 09:41:47.574634    6896 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	I1231 09:41:47.574817    6896 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:47.664565    6896 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:47.603537991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:47.665130    6896 cni.go:93] Creating CNI manager for ""
	I1231 09:41:47.665147    6896 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 09:41:47.665162    6896 start_flags.go:298] config:
	{Name:download-only-20211231094141-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:download-only-20211231094141-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 09:41:47.668960    6896 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 09:41:47.671935    6896 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 09:41:47.672061    6896 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 09:41:47.706897    6896 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 09:41:47.706929    6896 cache.go:57] Caching tarball of preloaded images
	I1231 09:41:47.707224    6896 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime containerd
	I1231 09:41:47.708449    6896 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 09:41:47.708471    6896 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 09:41:47.710276    6896 preload.go:238] getting checksum for preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:41:47.750726    6896 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:d038ade7b86a8c7338b2822e0cab8959 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4
	I1231 09:41:51.319208    6896 preload.go:248] saving checksum for preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:41:51.319304    6896 preload.go:255] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211231094141-6736"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.1/LogsDuration (0.08s)

TestDownloadOnly/v1.23.2-rc.0/json-events (10.83s)

=== RUN   TestDownloadOnly/v1.23.2-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.23.2-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20211231094141-6736 --force --alsologtostderr --kubernetes-version=v1.23.2-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.828176016s)
--- PASS: TestDownloadOnly/v1.23.2-rc.0/json-events (10.83s)

TestDownloadOnly/v1.23.2-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.23.2-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.2-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.2-rc.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.23.2-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20211231094141-6736
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20211231094141-6736: exit status 85 (88.400299ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/31 09:41:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.17.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1231 09:41:53.158303    7043 out.go:297] Setting OutFile to fd 1 ...
	I1231 09:41:53.158387    7043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:53.158391    7043 out.go:310] Setting ErrFile to fd 2...
	I1231 09:41:53.158396    7043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:41:53.158494    7043 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	W1231 09:41:53.158597    7043 root.go:293] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/config/config.json: no such file or directory
	I1231 09:41:53.158719    7043 out.go:304] Setting JSON to true
	I1231 09:41:53.159469    7043 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1468,"bootTime":1640942245,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 09:41:53.159537    7043 start.go:122] virtualization: kvm guest
	I1231 09:41:53.162449    7043 notify.go:174] Checking for updates...
	I1231 09:41:53.165538    7043 config.go:176] Loaded profile config "download-only-20211231094141-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	W1231 09:41:53.165591    7043 start.go:703] api.Load failed for download-only-20211231094141-6736: filestore "download-only-20211231094141-6736": Docker machine "download-only-20211231094141-6736" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1231 09:41:53.165644    7043 driver.go:344] Setting default libvirt URI to qemu:///system
	W1231 09:41:53.165675    7043 start.go:703] api.Load failed for download-only-20211231094141-6736: filestore "download-only-20211231094141-6736": Docker machine "download-only-20211231094141-6736" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1231 09:41:53.204426    7043 docker.go:132] docker version: linux-20.10.12
	I1231 09:41:53.204543    7043 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:53.300378    7043 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:53.233308412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:53.300480    7043 docker.go:237] overlay module found
	I1231 09:41:53.303835    7043 start.go:280] selected driver: docker
	I1231 09:41:53.303861    7043 start.go:795] validating driver "docker" against &{Name:download-only-20211231094141-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:download-only-20211231094141-6736 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker}
	I1231 09:41:53.303990    7043 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 09:41:53.303998    7043 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	I1231 09:41:53.304200    7043 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:41:53.394995    7043 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-12-31 09:41:53.334201348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:41:53.395759    7043 cni.go:93] Creating CNI manager for ""
	I1231 09:41:53.395784    7043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I1231 09:41:53.395797    7043 start_flags.go:298] config:
	{Name:download-only-20211231094141-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2-rc.0 ClusterName:download-only-20211231094141-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 09:41:53.398104    7043 cache.go:118] Beginning downloading kic base image for docker with containerd
	I1231 09:41:53.399824    7043 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 09:41:53.399871    7043 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I1231 09:41:53.435909    7043 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I1231 09:41:53.435933    7043 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I1231 09:41:53.437576    7043 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 09:41:53.437599    7043 cache.go:57] Caching tarball of preloaded images
	I1231 09:41:53.437934    7043 preload.go:132] Checking if preload exists for k8s version v1.23.2-rc.0 and runtime containerd
	I1231 09:41:53.440626    7043 preload.go:238] getting checksum for preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:41:53.487164    7043 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:faabbb8aea6d8b0b4e2e4a88dd990bd6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
	I1231 09:42:01.453164    7043 preload.go:248] saving checksum for preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I1231 09:42:01.453272    7043 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211231094141-6736"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2-rc.0/LogsDuration (0.09s)
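Note: the preload tarball fetched at 09:41:53 above can be pulled and verified by hand. A minimal sketch, assuming curl and md5sum on the host; the URL and md5 are the ones pinned in the download.go line, while the /tmp target path is illustrative:

    # Fetch the preload and check it against the md5 the downloader pinned
    URL=https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.2-rc.0-containerd-overlay2-amd64.tar.lz4
    curl -fL -o /tmp/preload.tar.lz4 "$URL"
    echo "faabbb8aea6d8b0b4e2e4a88dd990bd6  /tmp/preload.tar.lz4" | md5sum -c -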

TestDownloadOnly/DeleteAll (0.39s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20211231094141-6736
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnlyKic (11.77s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20211231094204-6736 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20211231094204-6736 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (10.348620971s)
helpers_test.go:176: Cleaning up "download-docker-20211231094204-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20211231094204-6736
--- PASS: TestDownloadOnlyKic (11.77s)
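The download-only flow above can be reproduced outside the harness. A sketch, assuming a minikube binary on PATH stands in for out/minikube-linux-amd64 and an illustrative profile name:

    # Populate image/preload caches without creating a cluster, then clean up
    # the profile the way helpers_test.go does
    minikube start --download-only -p download-demo --force --alsologtostderr \
      --driver=docker --container-runtime=containerd
    minikube delete -p download-demo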

TestOffline (101.5s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20211231101250-6736 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:56: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20211231101250-6736 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m33.50132588s)
helpers_test.go:176: Cleaning up "offline-containerd-20211231101250-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20211231101250-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20211231101250-6736: (7.995255698s)
--- PASS: TestOffline (101.50s)

TestAddons/Setup (144.92s)
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20211231094216-6736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-20211231094216-6736 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.918772329s)
--- PASS: TestAddons/Setup (144.92s)
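Setup exercises a single start with the whole addon set enabled at once. The equivalent invocation, assuming minikube on PATH and an illustrative profile name:

    minikube start -p addons-demo --wait=true --memory=4000 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=ingress \
      --addons=ingress-dns --addons=helm-tiller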

TestAddons/parallel/Registry (12.74s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 15.677841ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-k72l8" [b18eebf2-5e5c-4232-921a-ae901868985d] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011077999s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-wg97n" [1c077c27-6b90-4f56-9f2b-4fb7eb5f2003] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009638795s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20211231094216-6736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20211231094216-6736 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (1.553939305s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 ip
2021/12/31 09:44:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:339: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (12.74s)
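The registry check combines an in-cluster probe with a host-side fetch against the node IP. The same two steps by hand, assuming the illustrative addons-demo profile from the sketch above; the :5000 port matches the DEBUG GET line in the log:

    # In-cluster: the service DNS name must answer a wget probe
    kubectl --context addons-demo run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side: the registry proxy is published on the node IP
    curl -sI "http://$(minikube -p addons-demo ip):5000"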

TestAddons/parallel/Ingress (45.72s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20211231094216-6736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context addons-20211231094216-6736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (3.796223461s)
addons_test.go:183: (dbg) Run:  kubectl --context addons-20211231094216-6736 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20211231094216-6736 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [5c9be849-d35f-402c-ba5f-7a80ea1b4575] Pending
helpers_test.go:343: "nginx" [5c9be849-d35f-402c-ba5f-7a80ea1b4575] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [5c9be849-d35f-402c-ba5f-7a80ea1b4575] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.017492843s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context addons-20211231094216-6736 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable ingress --alsologtostderr -v=1: (29.329226078s)
--- PASS: TestAddons/parallel/Ingress (45.72s)
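The ingress verification curls the controller from inside the node with a Host header, then resolves an ingress-dns name against the node IP. By hand, assuming the same illustrative profile; the hostnames come from the test's fixture YAML:

    minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p addons-demo ip)"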

TestAddons/parallel/MetricsServer (5.74s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 15.340695ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-d4gzl" [7a68dd73-6671-47d4-b292-090068e8dc12] Running
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012057139s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20211231094216-6736 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/HelmTiller (8.13s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 23.497698ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-bjcsz" [1a9867ff-377e-4d84-819f-3b7c96277bc2] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009727451s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20211231094216-6736 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:424: (dbg) Done: kubectl --context addons-20211231094216-6736 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.546048443s)
addons_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.13s)
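The tiller check runs a one-off helm client pod against kube-system. The same probe by hand, assuming the illustrative addons-demo context:

    # "version" succeeds only if the helm 2 client can reach tiller-deploy
    kubectl --context addons-demo run --rm helm-test --restart=Never \
      --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version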

TestAddons/parallel/CSI (48.87s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 22.646336ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:515: (dbg) Done: kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.121862327s)
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20211231094216-6736 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [5914bf76-a8ba-49a5-a568-553f2410ad8d] Pending
helpers_test.go:343: "task-pv-pod" [5914bf76-a8ba-49a5-a568-553f2410ad8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [5914bf76-a8ba-49a5-a568-553f2410ad8d] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 27.00714515s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20211231094216-6736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20211231094216-6736 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete pod task-pv-pod
addons_test.go:545: (dbg) Done: kubectl --context addons-20211231094216-6736 delete pod task-pv-pod: (1.10323916s)
addons_test.go:551: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20211231094216-6736 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [5583bf17-7db4-48bc-8757-85db10d35765] Pending
helpers_test.go:343: "task-pv-pod-restore" [5583bf17-7db4-48bc-8757-85db10d35765] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [5583bf17-7db4-48bc-8757-85db10d35765] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.005703336s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20211231094216-6736 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.175324458s)
addons_test.go:593: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.87s)
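Throughout the CSI flow the helpers poll object status with JSONPath queries, as the helpers_test.go Run lines show. The two polls by hand, assuming the illustrative addons-demo context:

    # The PVC is ready once its phase reports Bound
    kubectl --context addons-demo get pvc hpvc -n default -o jsonpath={.status.phase}
    # The snapshot is ready once readyToUse reports true
    kubectl --context addons-demo get volumesnapshot new-snapshot-demo -n default \
      -o jsonpath={.status.readyToUse}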

TestAddons/serial/GCPAuth (40.64s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20211231094216-6736 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [c9f70120-5588-4136-9112-4cae3791b181] Pending
helpers_test.go:343: "busybox" [c9f70120-5588-4136-9112-4cae3791b181] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [c9f70120-5588-4136-9112-4cae3791b181] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.0064862s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20211231094216-6736 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20211231094216-6736 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-linux-amd64 -p addons-20211231094216-6736 addons disable gcp-auth --alsologtostderr -v=1: (6.139495391s)
addons_test.go:682: (dbg) Run:  out/minikube-linux-amd64 -p addons-20211231094216-6736 addons enable gcp-auth
addons_test.go:682: (dbg) Done: out/minikube-linux-amd64 -p addons-20211231094216-6736 addons enable gcp-auth: (2.990737252s)
addons_test.go:688: (dbg) Run:  kubectl --context addons-20211231094216-6736 apply -f testdata/private-image.yaml
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7f8587d5b7-5sjsq" [ef12655a-8fcf-41b4-aded-defe36c796a8] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7f8587d5b7-5sjsq" [ef12655a-8fcf-41b4-aded-defe36c796a8] Running
addons_test.go:695: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 13.008216886s
addons_test.go:701: (dbg) Run:  kubectl --context addons-20211231094216-6736 apply -f testdata/private-image-eu.yaml
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-869dcfd8c7-jrp7b" [b85365a1-ed38-43cd-aaa8-ae098d60dc62] Pending
helpers_test.go:343: "private-image-eu-869dcfd8c7-jrp7b" [b85365a1-ed38-43cd-aaa8-ae098d60dc62] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-869dcfd8c7-jrp7b" [b85365a1-ed38-43cd-aaa8-ae098d60dc62] Running
addons_test.go:706: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 9.01099668s
--- PASS: TestAddons/serial/GCPAuth (40.64s)
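The gcp-auth assertions boil down to credential environment injection into pods. The check by hand, assuming the illustrative addons-demo context and a running busybox pod as in testdata/busybox.yaml:

    # The addon's webhook should have injected both variables
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"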

TestAddons/StoppedEnableDisable (20.58s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20211231094216-6736
addons_test.go:133: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20211231094216-6736: (20.37067008s)
addons_test.go:137: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20211231094216-6736
addons_test.go:141: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20211231094216-6736
--- PASS: TestAddons/StoppedEnableDisable (20.58s)

TestCertOptions (61.6s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20211231101435-6736 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1231 10:14:41.557681    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
cert_options_test.go:50: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20211231101435-6736 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (57.824065845s)
cert_options_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20211231101435-6736 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:89: (dbg) Run:  kubectl --context cert-options-20211231101435-6736 config view
cert_options_test.go:101: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20211231101435-6736 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20211231101435-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20211231101435-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20211231101435-6736: (2.789910333s)
--- PASS: TestCertOptions (61.60s)
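The assertion behind the openssl Run line above is that the extra IPs, names, and port end up in the apiserver serving certificate. A sketch of the same inspection, assuming minikube on PATH and an illustrative profile name:

    minikube start -p cert-demo --memory=2048 --apiserver-ips=127.0.0.1 \
      --apiserver-ips=192.168.15.15 --apiserver-names=localhost \
      --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=containerd
    # The SAN block should list the extra names and IPs
    minikube -p cert-demo ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'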

TestCertExpiration (259.27s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211231101410-6736 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:124: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211231101410-6736 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (59.198805498s)
E1231 10:15:10.310346    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
cert_options_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20211231101410-6736 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20211231101410-6736 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (16.928197782s)
helpers_test.go:176: Cleaning up "cert-expiration-20211231101410-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20211231101410-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20211231101410-6736: (3.143125646s)
--- PASS: TestCertExpiration (259.27s)

TestForceSystemdFlag (265.79s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20211231101432-6736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20211231101432-6736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4m21.983010495s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20211231101432-6736 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20211231101432-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20211231101432-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20211231101432-6736: (3.277623916s)
--- PASS: TestForceSystemdFlag (265.79s)
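The test's final Run line reads containerd's config back out of the node to confirm the cgroup driver. A sketch of the check, assuming an illustrative profile and assuming SystemdCgroup is the runc option that --force-systemd is expected to set:

    minikube start -p systemd-demo --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    # Expect: SystemdCgroup = true (assumption about the config key)
    minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup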

TestForceSystemdEnv (75.5s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20211231101250-6736 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:151: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20211231101250-6736 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m9.433166283s)
docker_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20211231101250-6736 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20211231101250-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20211231101250-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20211231101250-6736: (5.601845761s)
--- PASS: TestForceSystemdEnv (75.50s)

TestKVMDriverInstallOrUpdate (1.95s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.95s)

TestErrorSpam/setup (42.96s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20211231094640-6736 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211231094640-6736 --driver=docker  --container-runtime=containerd
error_spam_test.go:79: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20211231094640-6736 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20211231094640-6736 --driver=docker  --container-runtime=containerd: (42.962490501s)
error_spam_test.go:89: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (42.96s)

TestErrorSpam/start (1.04s)
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 start --dry-run
--- PASS: TestErrorSpam/start (1.04s)

TestErrorSpam/status (1.31s)
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 status
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 status
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 status
--- PASS: TestErrorSpam/status (1.31s)

TestErrorSpam/pause (1.93s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 pause
--- PASS: TestErrorSpam/pause (1.93s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (14.92s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 stop
error_spam_test.go:157: (dbg) Done: out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 stop: (14.630303691s)
error_spam_test.go:157: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20211231094640-6736 --log_dir /tmp/nospam-20211231094640-6736 stop
--- PASS: TestErrorSpam/stop (14.92s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1693: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/files/etc/test/nested/copy/6736/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.34s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2075: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2075: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211231094749-6736 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m0.336594266s)
--- PASS: TestFunctional/serial/StartWithProxy (60.34s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.89s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:645: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --alsologtostderr -v=8
functional_test.go:645: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211231094749-6736 --alsologtostderr -v=8: (15.893392076s)
functional_test.go:649: soft start took 15.89406387s for "functional-20211231094749-6736" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.89s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:667: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.23s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:682: (dbg) Run:  kubectl --context functional-20211231094749-6736 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1028: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:3.1
functional_test.go:1028: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:3.1: (1.311384685s)
functional_test.go:1028: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:3.3
functional_test.go:1028: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:3.3: (1.197229549s)
functional_test.go:1028: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:latest
functional_test.go:1028: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add k8s.gcr.io/pause:latest: (1.069157519s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.58s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1059: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211231094749-6736 /tmp/functional-20211231094749-6736274225841
functional_test.go:1071: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache add minikube-local-cache-test:functional-20211231094749-6736
functional_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache delete minikube-local-cache-test:functional-20211231094749-6736
functional_test.go:1065: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211231094749-6736
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1092: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1135: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1135: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (401.84504ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cache reload
functional_test.go:1140: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 cache reload: (1.334261968s)
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.55s)
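The reload cycle above evicts an image inside the node, confirms it is gone (the exit status 1 from crictl inspecti is the expected failure), then restores it from minikube's on-host cache. The same cycle by hand, assuming an illustrative func-demo profile:

    minikube -p func-demo ssh sudo crictl rmi k8s.gcr.io/pause:latest
    minikube -p func-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # exits 1: image gone
    minikube -p func-demo cache reload
    minikube -p func-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest  # succeeds again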

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:702: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 kubectl -- --context functional-20211231094749-6736 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:727: (dbg) Run:  out/kubectl --context functional-20211231094749-6736 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (52.86s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:743: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1231 09:49:41.557360    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.563367    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.573655    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.594012    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.634510    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.714866    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:41.875387    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:42.196045    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:42.837049    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:44.117774    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:46.678895    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:49:51.799951    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:50:02.040940    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
functional_test.go:743: (dbg) Done: out/minikube-linux-amd64 start -p functional-20211231094749-6736 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.855820561s)
functional_test.go:747: restart took 52.855926081s for "functional-20211231094749-6736" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.86s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:797: (dbg) Run:  kubectl --context functional-20211231094749-6736 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:812: etcd phase: Running
functional_test.go:822: etcd status: Ready
functional_test.go:812: kube-apiserver phase: Running
functional_test.go:822: kube-apiserver status: Ready
functional_test.go:812: kube-controller-manager phase: Running
functional_test.go:822: kube-controller-manager status: Ready
functional_test.go:812: kube-scheduler phase: Running
functional_test.go:822: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1218: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 logs
functional_test.go:1218: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 logs: (1.075155579s)
--- PASS: TestFunctional/serial/LogsCmd (1.08s)

TestFunctional/serial/LogsFileCmd (1.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1235: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 logs --file /tmp/functional-20211231094749-67364007579149/logs.txt
functional_test.go:1235: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 logs --file /tmp/functional-20211231094749-67364007579149/logs.txt: (1.072367281s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.07s)

TestFunctional/parallel/ConfigCmd (0.56s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 config get cpus: exit status 14 (87.247581ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config set cpus 2
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config get cpus
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config unset cpus
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 config get cpus: exit status 14 (78.827692ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)
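
Note: the config round trip exercised above is easy to replay by hand. A minimal sketch, with `example` standing in for the profile name (the commands and the exit status 14 on an unset key are taken from the log; nothing else is implied):

    # "config get" on an unset key fails with exit status 14
    out/minikube-linux-amd64 -p example config unset cpus
    out/minikube-linux-amd64 -p example config get cpus    # exit 14: key not found
    out/minikube-linux-amd64 -p example config set cpus 2
    out/minikube-linux-amd64 -p example config get cpus    # prints 2, exit 0
    out/minikube-linux-amd64 -p example config unset cpus
    out/minikube-linux-amd64 -p example config get cpus    # exit 14 again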

TestFunctional/parallel/DashboardCmd (3.19s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:892: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211231094749-6736 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20211231094749-6736 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 40778: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.19s)

TestFunctional/parallel/DryRun (0.86s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:957: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:957: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211231094749-6736 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (381.159696ms)

-- stdout --
	* [functional-20211231094749-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I1231 09:50:21.465423   39519 out.go:297] Setting OutFile to fd 1 ...
	I1231 09:50:21.465521   39519 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:50:21.465526   39519 out.go:310] Setting ErrFile to fd 2...
	I1231 09:50:21.465531   39519 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:50:21.465667   39519 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 09:50:21.465949   39519 out.go:304] Setting JSON to false
	I1231 09:50:21.467837   39519 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1976,"bootTime":1640942245,"procs":564,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 09:50:21.467993   39519 start.go:122] virtualization: kvm guest
	I1231 09:50:21.474080   39519 out.go:176] * [functional-20211231094749-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 09:50:21.478769   39519 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 09:50:21.482023   39519 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 09:50:21.485911   39519 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 09:50:21.488888   39519 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 09:50:21.493406   39519 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 09:50:21.494280   39519 config.go:176] Loaded profile config "functional-20211231094749-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 09:50:21.494973   39519 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 09:50:21.566274   39519 docker.go:132] docker version: linux-20.10.12
	I1231 09:50:21.566396   39519 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:50:21.711860   39519 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:59 SystemTime:2021-12-31 09:50:21.60970348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:50:21.711990   39519 docker.go:237] overlay module found
	I1231 09:50:21.729120   39519 out.go:176] * Using the docker driver based on existing profile
	I1231 09:50:21.729181   39519 start.go:280] selected driver: docker
	I1231 09:50:21.729189   39519 start.go:795] validating driver "docker" against &{Name:functional-20211231094749-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:functional-20211231094749-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 09:50:21.729356   39519 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 09:50:21.729378   39519 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 09:50:21.729386   39519 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 09:50:21.729424   39519 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 09:50:21.729451   39519 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 09:50:21.733148   39519 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 09:50:21.737108   39519 out.go:176] 
	W1231 09:50:21.737308   39519 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1231 09:50:21.740208   39519 out.go:176] 

** /stderr **
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.86s)
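
Note: the non-zero exit above is the behavior under test: `--dry-run` runs driver and resource validation without creating anything, and a 250MB request is rejected against the 1800MB usable minimum with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of the two probes, assuming an existing profile named `example`:

    # under-provisioned request: validation fails, exit status 23
    out/minikube-linux-amd64 start -p example --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
    # without the memory override the same dry-run validates cleanly (exit 0)
    out/minikube-linux-amd64 start -p example --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd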

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:999: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20211231094749-6736 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:999: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20211231094749-6736 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (284.821592ms)

-- stdout --
	* [functional-20211231094749-6736] minikube v1.24.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I1231 09:50:11.966420   36879 out.go:297] Setting OutFile to fd 1 ...
	I1231 09:50:11.966512   36879 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:50:11.966517   36879 out.go:310] Setting ErrFile to fd 2...
	I1231 09:50:11.966522   36879 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 09:50:11.966726   36879 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 09:50:11.967010   36879 out.go:304] Setting JSON to false
	I1231 09:50:11.968580   36879 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1966,"bootTime":1640942245,"procs":546,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 09:50:11.968684   36879 start.go:122] virtualization: kvm guest
	I1231 09:50:11.972927   36879 out.go:176] * [functional-20211231094749-6736] minikube v1.24.0 sur Ubuntu 20.04 (kvm/amd64)
	I1231 09:50:11.975415   36879 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 09:50:11.978064   36879 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 09:50:11.980735   36879 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 09:50:11.983355   36879 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 09:50:11.985862   36879 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 09:50:11.986588   36879 config.go:176] Loaded profile config "functional-20211231094749-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 09:50:11.987127   36879 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 09:50:12.039967   36879 docker.go:132] docker version: linux-20.10.12
	I1231 09:50:12.040072   36879 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 09:50:12.153151   36879 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-12-31 09:50:12.072187588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 09:50:12.153286   36879 docker.go:237] overlay module found
	I1231 09:50:12.158782   36879 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1231 09:50:12.158852   36879 start.go:280] selected driver: docker
	I1231 09:50:12.158862   36879 start.go:795] validating driver "docker" against &{Name:functional-20211231094749-6736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1640212998-13227@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:functional-20211231094749-6736 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:global-housekeeping-interval Value:60m} {Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1231 09:50:12.159517   36879 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 09:50:12.159538   36879 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 09:50:12.159549   36879 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 09:50:12.159857   36879 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 09:50:12.159908   36879 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I1231 09:50:12.164177   36879 out.go:176]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 09:50:12.168155   36879 out.go:176] 
	W1231 09:50:12.168372   36879 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1231 09:50:12.171405   36879 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
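
Note: this test repeats the DryRun probe and checks that the RSRC_INSUFFICIENT_REQ_MEMORY failure comes back in French. A sketch of the idea; the assumption that minikube selects the language from the standard locale environment variables (LC_ALL/LANG) is mine, not stated in the log:

    # assumed: a French locale in the environment selects the translated output
    LC_ALL=fr out/minikube-linux-amd64 start -p example --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd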

TestFunctional/parallel/StatusCmd (1.54s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:841: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:859: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.54s)

TestFunctional/parallel/ServiceCmd (11.76s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1417: (dbg) Run:  kubectl --context functional-20211231094749-6736 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1423: (dbg) Run:  kubectl --context functional-20211231094749-6736 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1428: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-lrfrk" [d1a922fa-ab38-4dc1-a7e3-13360d63f918] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-lrfrk" [d1a922fa-ab38-4dc1-a7e3-13360d63f918] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1428: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 9.007982237s
functional_test.go:1433: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1455: found endpoint: https://192.168.49.2:31352
functional_test.go:1466: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1481: found endpoint for hello-node: http://192.168.49.2:31352
functional_test.go:1492: Attempting to fetch http://192.168.49.2:31352 ...
functional_test.go:1512: http://192.168.49.2:31352: success! body:

Hostname: hello-node-54fbb85-lrfrk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31352
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (11.76s)
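
Note: the block above is the full NodePort round trip: create a deployment, expose it, resolve its URL through minikube, and fetch it. Condensed (profile name `example` is a placeholder; the NodePort URL varies per run):

    kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
    kubectl expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p example service list
    URL=$(out/minikube-linux-amd64 -p example service hello-node --url)
    curl "$URL"    # echoserver reflects the request, as in the body above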

TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1527: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

TestFunctional/parallel/PersistentVolumeClaim (32.43s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [4368a2a2-65a8-4038-9c99-02b387e6f4d2] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008644849s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20211231094749-6736 get storageclass -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20211231094749-6736 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211231094749-6736 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211231094749-6736 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211231094749-6736 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [30e9da5d-90a6-411c-9b10-d258a06e1ef6] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [30e9da5d-90a6-411c-9b10-d258a06e1ef6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [30e9da5d-90a6-411c-9b10-d258a06e1ef6] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008483964s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20211231094749-6736 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211231094749-6736 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [5c51ce31-e282-4153-aba8-cd079268222c] Pending
helpers_test.go:343: "sp-pod" [5c51ce31-e282-4153-aba8-cd079268222c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [5c51ce31-e282-4153-aba8-cd079268222c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.006952279s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.43s)
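
Note: the persistence check above works by writing a file through one pod, deleting that pod, and reading the file back from a fresh pod bound to the same claim. The same sequence using the test's manifests (run with `kubectl --context` set to the profile's context):

    kubectl apply -f testdata/storage-provisioner/pvc.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml
    kubectl exec sp-pod -- touch /tmp/mount/foo
    kubectl delete -f testdata/storage-provisioner/pod.yaml
    kubectl apply -f testdata/storage-provisioner/pod.yaml    # new pod, same PVC
    kubectl exec sp-pod -- ls /tmp/mount                      # foo survived the pod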

TestFunctional/parallel/SSHCmd (0.94s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1562: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.94s)

TestFunctional/parallel/CpCmd (1.91s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh -n functional-20211231094749-6736 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 cp functional-20211231094749-6736:/home/docker/cp-test.txt /tmp/mk_test2371759346/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh -n functional-20211231094749-6736 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)

TestFunctional/parallel/MySQL (25.11s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1631: (dbg) Run:  kubectl --context functional-20211231094749-6736 replace --force -f testdata/mysql.yaml
functional_test.go:1637: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2021/12/31 09:50:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-zf7tz" [5098ed1a-0095-4fab-bf9d-2c1dd517034a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-zf7tz" [5098ed1a-0095-4fab-bf9d-2c1dd517034a] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1637: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.011407598s
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;"
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;": exit status 1 (223.458041ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;"
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;": exit status 1 (160.141632ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;"
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;": exit status 1 (143.61876ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;"
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;": exit status 1 (133.557555ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20211231094749-6736 exec mysql-b87c45988-zf7tz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.11s)
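
Note: the repeated "Access denied" and "Can't connect" failures above are expected while mysqld initializes inside the pod; the test simply retries until `show databases;` succeeds. A sketch of the same retry loop, looking the pod up by the `app=mysql` label the test waits on:

    POD=$(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    # keep retrying until the server is up and accepts the root password
    until kubectl exec "$POD" -- mysql -ppassword -e "show databases;"; do sleep 2; done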

TestFunctional/parallel/FileSync (0.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1767: Checking for existence of /etc/test/nested/copy/6736/hosts within VM
functional_test.go:1769: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /etc/test/nested/copy/6736/hosts"
E1231 09:50:22.521480    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
functional_test.go:1774: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.51s)

TestFunctional/parallel/CertSync (3.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1810: Checking for existence of /etc/ssl/certs/6736.pem within VM
functional_test.go:1811: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /etc/ssl/certs/6736.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1810: Checking for existence of /usr/share/ca-certificates/6736.pem within VM
functional_test.go:1811: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /usr/share/ca-certificates/6736.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1810: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1811: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1837: Checking for existence of /etc/ssl/certs/67362.pem within VM
functional_test.go:1838: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /etc/ssl/certs/67362.pem"
functional_test.go:1837: Checking for existence of /usr/share/ca-certificates/67362.pem within VM
functional_test.go:1838: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /usr/share/ca-certificates/67362.pem"
functional_test.go:1837: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1838: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.10s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20211231094749-6736 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.99s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1865: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo systemctl is-active docker": exit status 1 (506.009885ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1865: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo systemctl is-active crio": exit status 1 (486.89371ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.99s)
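
Note: on this containerd cluster the test wants docker and crio to be inactive, and that is what the exits encode: `systemctl is-active` returns 3 for an inactive unit, which `minikube ssh` surfaces as its own exit status 1 ("Process exited with status 3"). The two checks, plus a positive control the log doesn't show (assumed, but standard systemctl behavior):

    out/minikube-linux-amd64 -p example ssh "sudo systemctl is-active docker"      # inactive, remote exit 3
    out/minikube-linux-amd64 -p example ssh "sudo systemctl is-active crio"        # inactive, remote exit 3
    out/minikube-linux-amd64 -p example ssh "sudo systemctl is-active containerd"  # active, exit 0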

TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1258: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1263: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.68s)

TestFunctional/parallel/Version/short (0.11s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2097: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (3.04s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2111: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2111: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 version -o=json --components: (3.043785371s)
--- PASS: TestFunctional/parallel/Version/components (3.04s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20211231094749-6736 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.27s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20211231094749-6736 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [e25720d7-ba06-4be5-a1fa-b42bcab966ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [e25720d7-ba06-4be5-a1fa-b42bcab966ae] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.015509692s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.27s)

TestFunctional/parallel/ProfileCmd/profile_list (0.61s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1298: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1303: Took "527.601573ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1312: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1317: Took "87.182291ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.61s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1349: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1354: Took "496.691003ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1362: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1367: Took "94.810134ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/MountCmd/any-port (6.2s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20211231094749-6736 /tmp/mounttest4160521580:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1640944212174763072" to /tmp/mounttest4160521580/created-by-test
functional_test_mount_test.go:110: wrote "test-1640944212174763072" to /tmp/mounttest4160521580/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1640944212174763072" to /tmp/mounttest4160521580/test-1640944212174763072
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (475.082348ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 31 09:50 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 31 09:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 31 09:50 test-1640944212174763072
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh cat /mount-9p/test-1640944212174763072
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20211231094749-6736 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [3ebcc988-4262-419e-a2bc-301a40a86fde] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [3ebcc988-4262-419e-a2bc-301a40a86fde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [3ebcc988-4262-419e-a2bc-301a40a86fde] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 2.010145301s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20211231094749-6736 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211231094749-6736 /tmp/mounttest4160521580:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.20s)

TestFunctional/parallel/MountCmd/specific-port (2.53s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20211231094749-6736 /tmp/mounttest968783955:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (428.206495ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211231094749-6736 /tmp/mounttest968783955:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh "sudo umount -f /mount-9p": exit status 1 (483.580527ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20211231094749-6736 /tmp/mounttest968783955:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20211231094749-6736 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.111.200.106 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20211231094749-6736 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20211231094749-6736
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7                            | sha256:c20987 | 155MB  |
| docker.io/library/nginx                     | latest                         | sha256:605c77 | 56.7MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.1                        | sha256:b6d7ab | 32.6MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.1                        | sha256:f51846 | 30.2MB |
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 315kB  |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5             | sha256:6de166 | 54MB   |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | sha256:7801cf | 15MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.39MB |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.6                            | sha256:6270bb | 302kB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | sha256:e1482a | 66.9MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.1                        | sha256:71d575 | 15.1MB |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-20211231094749-6736 | sha256:e46054 | 1.74kB |
| gcr.io/google-containers/addon-resizer      | functional-20211231094749-6736 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/kube-proxy                       | v1.23.1                        | sha256:b46c42 | 39.3MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| docker.io/library/nginx                     | alpine                         | sha256:cc4422 | 10.2MB |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format json:
[{"id":"sha256:c20987f18b130f9d144c9828df630417e2a9523148930dc3963e9d0dab302a76","repoDigests":["docker.io/library/mysql@sha256:f2ad209efe9c67104167fc609cca6973c8422939491c9345270175a300419f94"],"repoTags":["docker.io/library/mysql:5.7"],"size":"154858823"},{"id":"sha256:cc44224bfe208a46fbc45471e8f9416f66b75d6307573e29634e7f42e27a9268","repoDigests":["docker.io/library/nginx@sha256:eb05700fe7baa6890b74278e39b66b2ed1326831f9ec3ed4bdc6361a4ac2f333"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10181739"},{"id":"sha256:605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85","repoDigests":["docker.io/library/nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31"],"repoTags":["docker.io/library/nginx:latest"],"size":"56722276"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2394466"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:b6d7abedde39968d56e9f53aaeea02a4fe6413497c4dedf091868eae09dcc320","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:f54681a71cce62cbc1b13ebb3dbf1d880f849112789811f98b6aebd2caa2f255"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.1"],"size":"32598867"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:e46054869ca11ba1853c2d723a277a8228c2297ccefeb76a5da6b520b0a6cf7b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20211231094749-6736"],"size":"1737"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"15029138"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20211231094749-6736"],"size":"10823156"},{"id":"sha256:f51846a4fd28801f333d9a13e4a77a96bd52f06e587ba664c2914f015c38e5d1","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:a7ed87380108a2d811f0d392a3fe87546c85bc366e0d1e024dfa74eb14468604"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.1"],"size":"30163119"},{"id":"sha256:b46c42588d5116766d0eb259ff372e7c1e3ecc41a842b0c18a8842083e34d62e","repoDigests":["k8s.gcr.io/kube-proxy@sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.1"],"size":"39272869"},{"id":"sha256:71d575efe62835f4882115d409a676dd24102215eee650bf23b9cf42af0e7c05","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:8be4eb1593cf9ff2d91b44596633b7815a3753696031a1eb4273d1b39427fa8c"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.1"],"size":"15129850"},{"id":"sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e"],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"66934416"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls --format yaml:
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85
repoDigests:
- docker.io/library/nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
repoTags:
- docker.io/library/nginx:latest
size: "56722276"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2394466"
- id: sha256:f51846a4fd28801f333d9a13e4a77a96bd52f06e587ba664c2914f015c38e5d1
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:a7ed87380108a2d811f0d392a3fe87546c85bc366e0d1e024dfa74eb14468604
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.1
size: "30163119"
- id: sha256:b46c42588d5116766d0eb259ff372e7c1e3ecc41a842b0c18a8842083e34d62e
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.1
size: "39272869"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:e46054869ca11ba1853c2d723a277a8228c2297ccefeb76a5da6b520b0a6cf7b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20211231094749-6736
size: "1737"
- id: sha256:c20987f18b130f9d144c9828df630417e2a9523148930dc3963e9d0dab302a76
repoDigests:
- docker.io/library/mysql@sha256:f2ad209efe9c67104167fc609cca6973c8422939491c9345270175a300419f94
repoTags:
- docker.io/library/mysql:5.7
size: "154858823"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:cc44224bfe208a46fbc45471e8f9416f66b75d6307573e29634e7f42e27a9268
repoDigests:
- docker.io/library/nginx@sha256:eb05700fe7baa6890b74278e39b66b2ed1326831f9ec3ed4bdc6361a4ac2f333
repoTags:
- docker.io/library/nginx:alpine
size: "10181739"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
size: "10823156"
- id: sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "15029138"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:b6d7abedde39968d56e9f53aaeea02a4fe6413497c4dedf091868eae09dcc320
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:f54681a71cce62cbc1b13ebb3dbf1d880f849112789811f98b6aebd2caa2f255
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.1
size: "32598867"
- id: sha256:71d575efe62835f4882115d409a676dd24102215eee650bf23b9cf42af0e7c05
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:8be4eb1593cf9ff2d91b44596633b7815a3753696031a1eb4273d1b39427fa8c
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.1
size: "15129850"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ec27f462cf1946220f5a9ace416a84a57c18f98c777876a8054405d1428cc92e
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "66934416"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:294: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:294: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20211231094749-6736 ssh pgrep buildkitd: exit status 1 (450.561489ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image build -t localhost/my-image:functional-20211231094749-6736 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:301: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image build -t localhost/my-image:functional-20211231094749-6736 testdata/build: (2.383902478s)
functional_test.go:309: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20211231094749-6736 image build -t localhost/my-image:functional-20211231094749-6736 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.4s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:50e44504ea4f19f141118a8a8868e6c5bb9856efa33f2183f5ccea7ac62aacc9
#4 resolve gcr.io/k8s-minikube/busybox@sha256:50e44504ea4f19f141118a8a8868e6c5bb9856efa33f2183f5ccea7ac62aacc9 0.0s done
#4 extracting sha256:3cb635b06aa273034d7080e0242e4b6628c59347d6ddefff019bfd82f45aa7d5 0.1s done
#4 DONE 0.1s

#5 [2/3] RUN true
#5 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.5s done
#8 exporting manifest sha256:c44c98cf27b8b976b1092a35a5edd1e1292685e15c9a43f7dd1933350495fe46 0.1s done
#8 exporting config sha256:c506ec17b9ee2f7a483e0c4eeafcf89b6b10a1f25a2cb5aba5e0cd1802e491fb 0.0s done
#8 naming to localhost/my-image:functional-20211231094749-6736 done
#8 DONE 0.6s
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.11s)

TestFunctional/parallel/ImageCommands/Setup (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:328: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:328: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.130531561s)
functional_test.go:333: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:341: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736: (6.823694053s)
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (7.25s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.43s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1957: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1957: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1957: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736: (7.810900414s)
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (8.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736: (4.565052333s)
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.06s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:366: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image save gcr.io/google-containers/addon-resizer:functional-20211231094749-6736 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:366: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image save gcr.io/google-containers/addon-resizer:functional-20211231094749-6736 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.353710906s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image rm gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar: (1.349539219s)
functional_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:405: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
functional_test.go:410: (dbg) Run:  out/minikube-linux-amd64 -p functional-20211231094749-6736 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
functional_test.go:410: (dbg) Done: out/minikube-linux-amd64 -p functional-20211231094749-6736 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211231094749-6736: (1.230828373s)
functional_test.go:415: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211231094749-6736
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211231094749-6736
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.03s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211231094749-6736
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (93.02s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20211231095054-6736 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1231 09:51:03.483292    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:52:25.404434    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20211231095054-6736 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m33.020647583s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (93.02s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.42s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons enable ingress --alsologtostderr -v=5: (11.419133609s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.42s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.45s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.01s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211231095054-6736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211231095054-6736 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.760207098s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211231095054-6736 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211231095054-6736 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [854b39fc-3910-4488-9ab6-3799b272c483] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [854b39fc-3910-4488-9ab6-3799b272c483] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 7.006669571s
addons_test.go:213: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20211231095054-6736 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 ip
addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:257: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons disable ingress-dns --alsologtostderr -v=1: (1.991818996s)
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons disable ingress --alsologtostderr -v=1
addons_test.go:262: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20211231095054-6736 addons disable ingress --alsologtostderr -v=1: (28.706164896s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.01s)

TestJSONOutput/start/Command (59.57s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20211231095329-6736 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20211231095329-6736 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (59.571470896s)
--- PASS: TestJSONOutput/start/Command (59.57s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20211231095329-6736 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.73s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20211231095329-6736 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (23.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20211231095329-6736 --output=json --user=testUser
E1231 09:54:41.557924    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20211231095329-6736 --output=json --user=testUser: (23.977870274s)
--- PASS: TestJSONOutput/stop/Command (23.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20211231095459-6736 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20211231095459-6736 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.72875ms)

-- stdout --
	{"specversion":"1.0","id":"cf3c70f9-563b-4d21-9584-9c2595c8858a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211231095459-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcaf2aa2-f8f4-4c95-9845-baa58a008d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"2d958c3f-98c5-4ea9-957e-16a1780a7cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8597233f-a7d9-4aab-95c8-8c85b02439fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig"}}
	{"specversion":"1.0","id":"bc7f3e63-ee12-4353-a774-20b190b50399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube"}}
	{"specversion":"1.0","id":"9c8e6258-a9be-4c40-a482-e901d8a4055f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3f5c2766-5d2f-4d39-88f1-704b932b5bba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20211231095459-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20211231095459-6736
--- PASS: TestErrorJSONOutput (0.33s)
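
With --output=json, minikube writes one CloudEvents-style JSON object per line (specversion, id, source, type, and a string-keyed data payload), and the test above checks that the unsupported driver surfaces as an io.k8s.sigs.minikube.error event with exitcode 56. A minimal consumer sketch, inferring the schema only from the fields visible in the stdout block above (an illustration, not code from this suite):

	// Sketch: decode the per-line JSON events shown in the stdout block above.
	// The struct fields are inferred from the log, not taken from minikube's source.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON lines mixed into the stream
			}
			// e.g. io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}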

TestKicCustomNetwork/create_custom_network (38.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20211231095500-6736 --network=
E1231 09:55:09.245570    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 09:55:10.310020    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.315428    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.325897    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.346433    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.386929    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.467415    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.627934    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:10.948602    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:11.588823    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:12.869346    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:15.431192    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:20.551576    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:55:30.792412    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20211231095500-6736 --network=: (36.017762561s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20211231095500-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20211231095500-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20211231095500-6736: (2.440623248s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.50s)

TestKicCustomNetwork/use_default_bridge_network (30.73s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20211231095538-6736 --network=bridge
E1231 09:55:51.273610    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20211231095538-6736 --network=bridge: (28.43289496s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20211231095538-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20211231095538-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20211231095538-6736: (2.250835831s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.73s)

TestKicExistingNetwork (32.25s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20211231095609-6736 --network=existing-network
E1231 09:56:32.234309    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20211231095609-6736 --network=existing-network: (29.577570698s)
helpers_test.go:176: Cleaning up "existing-network-20211231095609-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20211231095609-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20211231095609-6736: (2.414325752s)
--- PASS: TestKicExistingNetwork (32.25s)
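
Before starting, the test lists Docker networks with `docker network ls --format {{.Name}}` and then runs start with --network=existing-network, so the cluster attaches to a network that already exists rather than creating a fresh one. A membership check in the same spirit, built on the exact command visible in the log (a sketch; the helper name is ours, not the test's):

	// Sketch: report whether a named Docker network already exists,
	// using the same `docker network ls --format {{.Name}}` call as the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func networkExists(name string) (bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return false, err
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := networkExists("existing-network")
		fmt.Println(ok, err)
	}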

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (4.72s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20211231095641-6736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20211231095641-6736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.72291162s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.72s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20211231095641-6736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (4.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211231095641-6736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211231095641-6736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.544243515s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.54s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211231095641-6736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (5.97s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20211231095641-6736 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20211231095641-6736 --alsologtostderr -v=5: (5.971205125s)
--- PASS: TestMountStart/serial/DeleteFirst (5.97s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211231095641-6736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20211231095641-6736
mount_start_test.go:156: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20211231095641-6736: (1.292961639s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (5.98s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20211231095641-6736
mount_start_test.go:167: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20211231095641-6736: (4.975723697s)
--- PASS: TestMountStart/serial/RestartStopped (5.98s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20211231095641-6736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (108.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1231 09:57:39.269165    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.274590    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.284963    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.305344    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.345676    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.426034    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.586914    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:39.907297    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:40.548399    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:41.828859    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:44.390691    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:49.511286    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:57:54.154570    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 09:57:59.751743    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 09:58:20.232306    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.470168819s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.16s)

TestMultiNode/serial/DeployApp2Nodes (3.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:491: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- rollout status deployment/busybox
E1231 09:59:01.192525    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
multinode_test.go:491: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- rollout status deployment/busybox: (1.693297068s)
multinode_test.go:497: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-f7k7m -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-zjvnc -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-f7k7m -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-zjvnc -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-f7k7m -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-zjvnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.54s)

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-f7k7m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-f7k7m -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:553: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-zjvnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20211231095711-6736 -- exec busybox-7978565885-zjvnc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

TestMultiNode/serial/AddNode (44.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211231095711-6736 -v 3 --alsologtostderr
E1231 09:59:41.557734    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20211231095711-6736 -v 3 --alsologtostderr: (43.761676131s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.68s)

TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (13.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --output json --alsologtostderr
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp testdata/cp-test.txt multinode-20211231095711-6736:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736:/home/docker/cp-test.txt /tmp/mk_cp_test3501548483/cp-test_multinode-20211231095711-6736.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736:/home/docker/cp-test.txt multinode-20211231095711-6736-m02:/home/docker/cp-test_multinode-20211231095711-6736_multinode-20211231095711-6736-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736_multinode-20211231095711-6736-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736:/home/docker/cp-test.txt multinode-20211231095711-6736-m03:/home/docker/cp-test_multinode-20211231095711-6736_multinode-20211231095711-6736-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736_multinode-20211231095711-6736-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp testdata/cp-test.txt multinode-20211231095711-6736-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m02:/home/docker/cp-test.txt /tmp/mk_cp_test3501548483/cp-test_multinode-20211231095711-6736-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m02:/home/docker/cp-test.txt multinode-20211231095711-6736:/home/docker/cp-test_multinode-20211231095711-6736-m02_multinode-20211231095711-6736.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736-m02_multinode-20211231095711-6736.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m02:/home/docker/cp-test.txt multinode-20211231095711-6736-m03:/home/docker/cp-test_multinode-20211231095711-6736-m02_multinode-20211231095711-6736-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736-m02_multinode-20211231095711-6736-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp testdata/cp-test.txt multinode-20211231095711-6736-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m03:/home/docker/cp-test.txt /tmp/mk_cp_test3501548483/cp-test_multinode-20211231095711-6736-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m03:/home/docker/cp-test.txt multinode-20211231095711-6736:/home/docker/cp-test_multinode-20211231095711-6736-m03_multinode-20211231095711-6736.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736-m03_multinode-20211231095711-6736.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 cp multinode-20211231095711-6736-m03:/home/docker/cp-test.txt multinode-20211231095711-6736-m02:/home/docker/cp-test_multinode-20211231095711-6736-m03_multinode-20211231095711-6736-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 ssh -n multinode-20211231095711-6736-m02 "sudo cat /home/docker/cp-test_multinode-20211231095711-6736-m03_multinode-20211231095711-6736-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.41s)
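
The CopyFile matrix above repeats one primitive for every source/target pair: `minikube cp` places the file, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. A sketch of that primitive using the same CLI invocations as the log (an illustration with a hypothetical helper name, not the suite's actual helper):

	// Sketch: copy a file onto a node and read it back, mirroring the
	// cp/ssh pairs in the CopyFile log above. Illustration only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func copyAndReadBack(profile, src, node, dst string) (string, error) {
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, node+":"+dst)
		if out, err := cp.CombinedOutput(); err != nil {
			return "", fmt.Errorf("cp failed: %v: %s", err, out)
		}
		ssh := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
		out, err := ssh.Output()
		return string(out), err
	}

	func main() {
		got, err := copyAndReadBack("multinode-20211231095711-6736", "testdata/cp-test.txt",
			"multinode-20211231095711-6736-m02", "/home/docker/cp-test.txt")
		if err != nil {
			panic(err)
		}
		fmt.Print(got)
	}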

TestMultiNode/serial/StopNode (21.64s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 node stop m03
E1231 10:00:10.309774    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
E1231 10:00:23.112762    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
multinode_test.go:215: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211231095711-6736 node stop m03: (20.250341357s)
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211231095711-6736 status: exit status 7 (703.773346ms)

-- stdout --
	multinode-20211231095711-6736
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211231095711-6736-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211231095711-6736-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr: exit status 7 (686.169132ms)

-- stdout --
	multinode-20211231095711-6736
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211231095711-6736-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211231095711-6736-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1231 10:00:24.088931   82870 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:00:24.089065   82870 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:00:24.089077   82870 out.go:310] Setting ErrFile to fd 2...
	I1231 10:00:24.089083   82870 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:00:24.089213   82870 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:00:24.089404   82870 out.go:304] Setting JSON to false
	I1231 10:00:24.089425   82870 mustload.go:65] Loading cluster: multinode-20211231095711-6736
	I1231 10:00:24.089872   82870 config.go:176] Loaded profile config "multinode-20211231095711-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:00:24.089898   82870 status.go:253] checking status of multinode-20211231095711-6736 ...
	I1231 10:00:24.090326   82870 cli_runner.go:133] Run: docker container inspect multinode-20211231095711-6736 --format={{.State.Status}}
	I1231 10:00:24.125615   82870 status.go:328] multinode-20211231095711-6736 host status = "Running" (err=<nil>)
	I1231 10:00:24.125668   82870 host.go:66] Checking if "multinode-20211231095711-6736" exists ...
	I1231 10:00:24.126032   82870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211231095711-6736
	I1231 10:00:24.162130   82870 host.go:66] Checking if "multinode-20211231095711-6736" exists ...
	I1231 10:00:24.162412   82870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:00:24.162460   82870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211231095711-6736
	I1231 10:00:24.200836   82870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49212 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/multinode-20211231095711-6736/id_rsa Username:docker}
	I1231 10:00:24.297500   82870 ssh_runner.go:195] Run: systemctl --version
	I1231 10:00:24.301772   82870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:00:24.312543   82870 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:00:24.419800   82870 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-31 10:00:24.346338914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:00:24.421098   82870 kubeconfig.go:92] found "multinode-20211231095711-6736" server: "https://192.168.49.2:8443"
	I1231 10:00:24.421139   82870 api_server.go:165] Checking apiserver status ...
	I1231 10:00:24.421174   82870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1231 10:00:24.440409   82870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1180/cgroup
	I1231 10:00:24.448844   82870 api_server.go:181] apiserver freezer: "11:freezer:/docker/26e15693ba5fc91d53d4ae5fa23494e764da542b3a51c6706b35b2970396e7e3/kubepods/burstable/pod6442a5bfde9c6a3edac1b3c41eedc49d/6c2167e4c4ed5898efe3efd6142f90249a2e0d74df779083a6352c3acd6226cf"
	I1231 10:00:24.448915   82870 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/26e15693ba5fc91d53d4ae5fa23494e764da542b3a51c6706b35b2970396e7e3/kubepods/burstable/pod6442a5bfde9c6a3edac1b3c41eedc49d/6c2167e4c4ed5898efe3efd6142f90249a2e0d74df779083a6352c3acd6226cf/freezer.state
	I1231 10:00:24.456476   82870 api_server.go:203] freezer state: "THAWED"
	I1231 10:00:24.456509   82870 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1231 10:00:24.461068   82870 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1231 10:00:24.461098   82870 status.go:419] multinode-20211231095711-6736 apiserver status = Running (err=<nil>)
	I1231 10:00:24.461106   82870 status.go:255] multinode-20211231095711-6736 status: &{Name:multinode-20211231095711-6736 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1231 10:00:24.461120   82870 status.go:253] checking status of multinode-20211231095711-6736-m02 ...
	I1231 10:00:24.461377   82870 cli_runner.go:133] Run: docker container inspect multinode-20211231095711-6736-m02 --format={{.State.Status}}
	I1231 10:00:24.495505   82870 status.go:328] multinode-20211231095711-6736-m02 host status = "Running" (err=<nil>)
	I1231 10:00:24.495533   82870 host.go:66] Checking if "multinode-20211231095711-6736-m02" exists ...
	I1231 10:00:24.495837   82870 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211231095711-6736-m02
	I1231 10:00:24.530716   82870 host.go:66] Checking if "multinode-20211231095711-6736-m02" exists ...
	I1231 10:00:24.531001   82870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1231 10:00:24.531052   82870 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211231095711-6736-m02
	I1231 10:00:24.567436   82870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/machines/multinode-20211231095711-6736-m02/id_rsa Username:docker}
	I1231 10:00:24.662023   82870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1231 10:00:24.672818   82870 status.go:255] multinode-20211231095711-6736-m02 status: &{Name:multinode-20211231095711-6736-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1231 10:00:24.672899   82870 status.go:253] checking status of multinode-20211231095711-6736-m03 ...
	I1231 10:00:24.673147   82870 cli_runner.go:133] Run: docker container inspect multinode-20211231095711-6736-m03 --format={{.State.Status}}
	I1231 10:00:24.710561   82870 status.go:328] multinode-20211231095711-6736-m03 host status = "Stopped" (err=<nil>)
	I1231 10:00:24.710590   82870 status.go:341] host is not running, skipping remaining checks
	I1231 10:00:24.710596   82870 status.go:255] multinode-20211231095711-6736-m03 status: &{Name:multinode-20211231095711-6736-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.64s)
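
The stderr trace above shows how `status` fills in each row: `docker container inspect` for host state, an SSH probe for kubelet, and, for the control plane, locating the apiserver via pgrep and its cgroup freezer state before requesting /healthz and expecting "200: ok". A sketch of that final health probe against the endpoint shown in the log, skipping TLS verification since the test cluster presents its own CA (an illustration, not minikube's implementation):

	// Sketch: probe the apiserver /healthz endpoint from the trace above.
	// InsecureSkipVerify is for the self-signed test cluster only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("apiserver: %d: %s\n", resp.StatusCode, body) // log above: "returned 200: ok"
	}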

TestMultiNode/serial/StartAfterStop (36.8s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 node start m03 --alsologtostderr
E1231 10:00:37.996454    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
multinode_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211231095711-6736 node start m03 --alsologtostderr: (35.844050247s)
multinode_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.80s)

TestMultiNode/serial/RestartKeepsNodes (193.46s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211231095711-6736
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20211231095711-6736
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20211231095711-6736: (1m0.38188825s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true -v=8 --alsologtostderr
E1231 10:02:39.268932    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 10:03:06.953756    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true -v=8 --alsologtostderr: (2m12.949981027s)
multinode_test.go:305: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211231095711-6736
--- PASS: TestMultiNode/serial/RestartKeepsNodes (193.46s)

TestMultiNode/serial/DeleteNode (24.78s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211231095711-6736 node delete m03: (23.789552223s)
multinode_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.78s)

TestMultiNode/serial/StopMultiNode (40.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 stop
E1231 10:04:41.557319    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:05:10.309890    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
multinode_test.go:319: (dbg) Done: out/minikube-linux-amd64 -p multinode-20211231095711-6736 stop: (40.390358797s)
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211231095711-6736 status: exit status 7 (134.771331ms)

-- stdout --
	multinode-20211231095711-6736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211231095711-6736-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr: exit status 7 (145.478032ms)

-- stdout --
	multinode-20211231095711-6736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211231095711-6736-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1231 10:05:20.341779   93942 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:05:20.341870   93942 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:05:20.341874   93942 out.go:310] Setting ErrFile to fd 2...
	I1231 10:05:20.341877   93942 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:05:20.342019   93942 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:05:20.342252   93942 out.go:304] Setting JSON to false
	I1231 10:05:20.342276   93942 mustload.go:65] Loading cluster: multinode-20211231095711-6736
	I1231 10:05:20.342683   93942 config.go:176] Loaded profile config "multinode-20211231095711-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:05:20.342703   93942 status.go:253] checking status of multinode-20211231095711-6736 ...
	I1231 10:05:20.343129   93942 cli_runner.go:133] Run: docker container inspect multinode-20211231095711-6736 --format={{.State.Status}}
	I1231 10:05:20.379912   93942 status.go:328] multinode-20211231095711-6736 host status = "Stopped" (err=<nil>)
	I1231 10:05:20.379947   93942 status.go:341] host is not running, skipping remaining checks
	I1231 10:05:20.379955   93942 status.go:255] multinode-20211231095711-6736 status: &{Name:multinode-20211231095711-6736 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1231 10:05:20.379980   93942 status.go:253] checking status of multinode-20211231095711-6736-m02 ...
	I1231 10:05:20.380266   93942 cli_runner.go:133] Run: docker container inspect multinode-20211231095711-6736-m02 --format={{.State.Status}}
	I1231 10:05:20.415419   93942 status.go:328] multinode-20211231095711-6736-m02 host status = "Stopped" (err=<nil>)
	I1231 10:05:20.415444   93942 status.go:341] host is not running, skipping remaining checks
	I1231 10:05:20.415450   93942 status.go:255] multinode-20211231095711-6736-m02 status: &{Name:multinode-20211231095711-6736-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.67s)
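Note the exit codes above: after a stop, minikube status deliberately exits 7 rather than 0, so any wrapper script has to tolerate a non-zero exit for stopped hosts. A minimal sketch, reusing this run's profile name:

	# status exits 7 when the host is stopped; capture the code rather than abort on it
	out/minikube-linux-amd64 -p multinode-20211231095711-6736 status || echo "status exited $? (7 = stopped, may be ok)"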

TestMultiNode/serial/RestartMultiNode (98.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1231 10:06:04.606110    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211231095711-6736 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m37.52868942s)
multinode_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20211231095711-6736 status --alsologtostderr
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.35s)

TestMultiNode/serial/ValidateNameConflict (47.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20211231095711-6736
multinode_test.go:457: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211231095711-6736-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20211231095711-6736-m02 --driver=docker  --container-runtime=containerd: exit status 14 (91.388724ms)

-- stdout --
	* [multinode-20211231095711-6736-m02] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20211231095711-6736-m02' is duplicated with machine name 'multinode-20211231095711-6736-m02' in profile 'multinode-20211231095711-6736'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20211231095711-6736-m03 --driver=docker  --container-runtime=containerd
E1231 10:07:39.271337    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20211231095711-6736-m03 --driver=docker  --container-runtime=containerd: (44.438481275s)
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20211231095711-6736
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20211231095711-6736: exit status 80 (377.063433ms)

-- stdout --
	* Adding node m03 to cluster multinode-20211231095711-6736
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20211231095711-6736-m03 already exists in multinode-20211231095711-6736-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20211231095711-6736-m03
multinode_test.go:477: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20211231095711-6736-m03: (2.770760686s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.75s)
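The two rejections above are the intended guardrails: a new profile may not reuse an existing machine name (exit 14, MK_USAGE), and node add refuses a node name that already belongs to another profile (exit 80, GUEST_NODE_ADD). A condensed sketch with hypothetical profile names:

	minikube start -p demo          # creates machine "demo"
	minikube node add -p demo       # adds machine "demo-m02" to the same cluster
	minikube start -p demo-m02      # rejected: duplicates the machine name above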

TestPreload (158.65s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211231100751-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211231100751-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m26.243324066s)
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211231100751-6736 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:72: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20211231100751-6736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
E1231 10:09:41.557279    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:10:10.310529    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20211231100751-6736 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (1m8.256044262s)
preload_test.go:81: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20211231100751-6736 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20211231100751-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20211231100751-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20211231100751-6736: (2.811082939s)
--- PASS: TestPreload (158.65s)
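For reference, the preload flow exercised here: start without preloaded images, pull an extra image through crictl, restart on a newer patch release, and confirm the image survived. The same steps, condensed, with a hypothetical profile name:

	minikube start -p preload-demo --preload=false --container-runtime=containerd --kubernetes-version=v1.17.0
	minikube ssh -p preload-demo -- sudo crictl pull gcr.io/k8s-minikube/busybox
	minikube start -p preload-demo --container-runtime=containerd --kubernetes-version=v1.17.3
	minikube ssh -p preload-demo -- sudo crictl image ls    # busybox should still be listed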

TestScheduledStopUnix (121.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20211231101030-6736 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20211231101030-6736 --memory=2048 --driver=docker  --container-runtime=containerd: (43.828247765s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211231101030-6736 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20211231101030-6736 -n scheduled-stop-20211231101030-6736
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211231101030-6736 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211231101030-6736 --cancel-scheduled
E1231 10:11:33.358486    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211231101030-6736 -n scheduled-stop-20211231101030-6736
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211231101030-6736
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20211231101030-6736 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20211231101030-6736
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20211231101030-6736: exit status 7 (99.305982ms)

-- stdout --
	scheduled-stop-20211231101030-6736
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211231101030-6736 -n scheduled-stop-20211231101030-6736
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20211231101030-6736 -n scheduled-stop-20211231101030-6736: exit status 7 (99.449999ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20211231101030-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20211231101030-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20211231101030-6736: (5.352682165s)
--- PASS: TestScheduledStopUnix (121.12s)
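The scheduled-stop lifecycle above, condensed (hypothetical profile name): a stop can be armed, cancelled, and re-armed, and once it fires, status exits 7 as with any stopped host.

	minikube stop -p demo --schedule 5m         # arm a stop five minutes out
	minikube stop -p demo --cancel-scheduled    # disarm it again
	minikube stop -p demo --schedule 15s        # re-arm; after it fires, status exits 7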

TestInsufficientStorage (19.85s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20211231101231-6736 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1231 10:12:39.272020    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20211231101231-6736 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (12.834488069s)

-- stdout --
	{"specversion":"1.0","id":"7e34d2be-0209-425c-808a-4bb7b4491641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211231101231-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b341c15-b312-4adb-adc5-982807c61539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=12739"}}
	{"specversion":"1.0","id":"71aef193-acb3-4355-8332-2f9924b8e3bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c738b947-b93f-4acc-86d2-c2223c743908","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig"}}
	{"specversion":"1.0","id":"a3b583cd-9c7f-48ad-bcfd-6fdc980c3c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube"}}
	{"specversion":"1.0","id":"1479c160-eea5-4447-99be-f6cc6bdd00bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fb673b0a-38d4-4fff-a42d-1d9f0296e258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"48ce6653-d036-4866-bdd3-6f60d3ab2fa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcc3c7b3-1ed5-4237-b14b-a9a86de2674d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"41a1b944-e9f9-4914-b085-2d03fc9d6f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"b56d88f9-6006-4ff4-9617-d8fa89bc7b7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211231101231-6736 in cluster insufficient-storage-20211231101231-6736","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a10629e-0d2a-4143-be05-9ba19b1902a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2da4c790-6a6c-49e6-b008-7cd8fd50355a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fed3da1a-fd44-42f0-91ef-adcfbea8e59a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20211231101231-6736 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20211231101231-6736 --output=json --layout=cluster: exit status 7 (392.365603ms)

-- stdout --
	{"Name":"insufficient-storage-20211231101231-6736","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211231101231-6736","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1231 10:12:44.350692  115712 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211231101231-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20211231101231-6736 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20211231101231-6736 --output=json --layout=cluster: exit status 7 (398.058841ms)

-- stdout --
	{"Name":"insufficient-storage-20211231101231-6736","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211231101231-6736","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1231 10:12:44.750633  115811 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211231101231-6736" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	E1231 10:12:44.764112  115811 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/insufficient-storage-20211231101231-6736/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20211231101231-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20211231101231-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20211231101231-6736: (6.228926406s)
--- PASS: TestInsufficientStorage (19.85s)
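With --output=json, start progress is emitted as one CloudEvent per line, which makes runs like the one above scriptable. A minimal sketch for extracting just the human-readable messages, assuming jq is available and a hypothetical profile name:

	minikube start -p demo --output=json 2>/dev/null | jq -r '.data.message? // empty'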

TestRunningBinaryUpgrade (158.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.2182675902.exe start -p running-upgrade-20211231101758-6736 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.2182675902.exe start -p running-upgrade-20211231101758-6736 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (57.250536074s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20211231101758-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20211231101758-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m37.864281833s)
helpers_test.go:176: Cleaning up "running-upgrade-20211231101758-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20211231101758-6736

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20211231101758-6736: (3.367286799s)
--- PASS: TestRunningBinaryUpgrade (158.95s)

TestKubernetesUpgrade (215.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.571119303s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211231101536-6736

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20211231101536-6736: (20.433153224s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20211231101536-6736 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20211231101536-6736 status --format={{.Host}}: exit status 7 (125.285378ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.23.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1231 10:17:39.268887    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.23.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m21.920439803s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20211231101536-6736 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (105.874205ms)

-- stdout --
	* [kubernetes-upgrade-20211231101536-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.2-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20211231101536-6736
	    minikube start -p kubernetes-upgrade-20211231101536-6736 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211231101536-67362 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.2-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211231101536-6736 --kubernetes-version=v1.23.2-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.23.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20211231101536-6736 --memory=2200 --kubernetes-version=v1.23.2-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.693497237s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20211231101536-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211231101536-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20211231101536-6736: (9.671786635s)
--- PASS: TestKubernetesUpgrade (215.59s)
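The version dance above in short: in-place upgrades are supported, downgrades are refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested way out is to recreate or fork the cluster. A condensed sketch with a hypothetical profile name:

	minikube start -p demo --kubernetes-version=v1.16.0
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.23.2-rc.0    # upgrade: allowed
	minikube start -p demo --kubernetes-version=v1.16.0         # downgrade: exit 106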

TestMissingContainerUpgrade (147.99s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.600748375.exe start -p missing-upgrade-20211231101530-6736 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.600748375.exe start -p missing-upgrade-20211231101530-6736 --memory=2200 --driver=docker  --container-runtime=containerd: (1m3.517285763s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20211231101530-6736

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20211231101530-6736: (10.3200437s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20211231101530-6736
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20211231101530-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20211231101530-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.510741908s)
helpers_test.go:176: Cleaning up "missing-upgrade-20211231101530-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20211231101530-6736
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20211231101530-6736: (5.968055611s)
--- PASS: TestMissingContainerUpgrade (147.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (121.11942ms)

-- stdout --
	* [NoKubernetes-20211231101250-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
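As the error text above says, --kubernetes-version and --no-kubernetes are mutually exclusive (exit 14); if a version is pinned in the global config, unset it first:

	minikube config unset kubernetes-version
	minikube start -p demo --no-kubernetes --container-runtime=containerd    # hypothetical profile name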

TestStoppedBinaryUpgrade/Setup (0.5s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

TestNoKubernetes/serial/StartWithK8s (70.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --driver=docker  --container-runtime=containerd: (1m9.888403563s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20211231101250-6736 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.40s)

TestStoppedBinaryUpgrade/Upgrade (154.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.3413174078.exe start -p stopped-upgrade-20211231101250-6736 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.3413174078.exe start -p stopped-upgrade-20211231101250-6736 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m3.090625554s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.3413174078.exe -p stopped-upgrade-20211231101250-6736 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.3413174078.exe -p stopped-upgrade-20211231101250-6736 stop: (1.440337139s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20211231101250-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20211231101250-6736 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m30.337237344s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.87s)

TestNoKubernetes/serial/StartWithStopK8s (7.12s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --driver=docker  --container-runtime=containerd
E1231 10:14:02.314433    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
no_kubernetes_test.go:113: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.77525875s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20211231101250-6736 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20211231101250-6736 status -o json: exit status 2 (497.158662ms)

-- stdout --
	{"Name":"NoKubernetes-20211231101250-6736","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20211231101250-6736

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:125: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20211231101250-6736: (2.842161767s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.12s)

TestNetworkPlugins/group/false (1.21s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:214: (dbg) Run:  out/minikube-linux-amd64 start -p false-20211231101407-6736 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20211231101407-6736 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (267.399135ms)

-- stdout --
	* [false-20211231101407-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=12739
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I1231 10:14:07.559902  128317 out.go:297] Setting OutFile to fd 1 ...
	I1231 10:14:07.560007  128317 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:14:07.560012  128317 out.go:310] Setting ErrFile to fd 2...
	I1231 10:14:07.560018  128317 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1231 10:14:07.560157  128317 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/bin
	I1231 10:14:07.560643  128317 out.go:304] Setting JSON to false
	I1231 10:14:07.562172  128317 start.go:112] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3402,"bootTime":1640942245,"procs":449,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.11.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1231 10:14:07.562287  128317 start.go:122] virtualization: kvm guest
	I1231 10:14:07.566199  128317 out.go:176] * [false-20211231101407-6736] minikube v1.24.0 on Ubuntu 20.04 (kvm/amd64)
	I1231 10:14:07.568586  128317 out.go:176]   - MINIKUBE_LOCATION=12739
	I1231 10:14:07.566493  128317 notify.go:174] Checking for updates...
	I1231 10:14:07.571193  128317 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1231 10:14:07.573884  128317 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/kubeconfig
	I1231 10:14:07.578542  128317 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube
	I1231 10:14:07.581054  128317 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1231 10:14:07.581636  128317 config.go:176] Loaded profile config "NoKubernetes-20211231101250-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1231 10:14:07.581799  128317 config.go:176] Loaded profile config "offline-containerd-20211231101250-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.1
	I1231 10:14:07.581871  128317 config.go:176] Loaded profile config "stopped-upgrade-20211231101250-6736": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1231 10:14:07.581925  128317 driver.go:344] Setting default libvirt URI to qemu:///system
	I1231 10:14:07.632520  128317 docker.go:132] docker version: linux-20.10.12
	I1231 10:14:07.632620  128317 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I1231 10:14:07.738836  128317 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:14 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:50 SystemTime:2021-12-31 10:14:07.667954599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.11.0-1023-gcp OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33668894720 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
	I1231 10:14:07.739005  128317 docker.go:237] overlay module found
	I1231 10:14:07.743087  128317 out.go:176] * Using the docker driver based on user configuration
	I1231 10:14:07.743123  128317 start.go:280] selected driver: docker
	I1231 10:14:07.743136  128317 start.go:795] validating driver "docker" against <nil>
	I1231 10:14:07.743159  128317 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1231 10:14:07.743187  128317 start.go:1498] auto setting extra-config to "kubelet.global-housekeeping-interval=60m".
	I1231 10:14:07.743197  128317 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
	W1231 10:14:07.743243  128317 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W1231 10:14:07.743271  128317 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I1231 10:14:07.745161  128317 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I1231 10:14:07.747730  128317 out.go:176] 
	W1231 10:14:07.747874  128317 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1231 10:14:07.749882  128317 out.go:176] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20211231101407-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20211231101407-6736
--- PASS: TestNetworkPlugins/group/false (1.21s)
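The MK_USAGE failure above is expected: with the containerd runtime, minikube has no built-in container networking, so --cni=false is rejected (exit 14) and an explicit CNI must be chosen instead, e.g.:

	minikube start -p demo --container-runtime=containerd --cni=calico    # hypothetical profile name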

TestNoKubernetes/serial/Start (4.58s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.576340928s)
--- PASS: TestNoKubernetes/serial/Start (4.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20211231101250-6736 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20211231101250-6736 "sudo systemctl is-active --quiet service kubelet": exit status 1 (464.431189ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)
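The assertion above leans on systemd semantics: systemctl is-active exits 0 for an active unit and 3 for an inactive one, and minikube ssh surfaces that failure as exit status 1. A minimal sketch with a hypothetical profile name:

	minikube ssh -p demo "sudo systemctl is-active --quiet kubelet" || echo "kubelet is not running"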

TestNoKubernetes/serial/ProfileList (1.8s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.80s)

TestNoKubernetes/serial/Stop (5.89s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20211231101250-6736
no_kubernetes_test.go:159: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20211231101250-6736: (5.886285474s)
--- PASS: TestNoKubernetes/serial/Stop (5.89s)

TestNoKubernetes/serial/StartNoArgs (6.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20211231101250-6736 --driver=docker  --container-runtime=containerd: (6.536043513s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20211231101250-6736 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20211231101250-6736 "sudo systemctl is-active --quiet service kubelet": exit status 1 (447.89707ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20211231101250-6736
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20211231101250-6736: (1.004409032s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestPause/serial/Start (67.76s)

=== RUN   TestPause/serial/Start
pause_test.go:82: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211231101829-6736 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:82: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211231101829-6736 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m7.760258583s)
--- PASS: TestPause/serial/Start (67.76s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p auto-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m22.848150286s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.85s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (83.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m23.015770043s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (83.02s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (16.49s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20211231101829-6736 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1231 10:19:41.557441    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
pause_test.go:94: (dbg) Done: out/minikube-linux-amd64 start -p pause-20211231101829-6736 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.47537786s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (16.49s)

                                                
                                    
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:112: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211231101829-6736 --alsologtostderr -v=5
pause_test.go:112: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20211231101829-6736 --alsologtostderr -v=5: (1.027689872s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
TestPause/serial/VerifyStatus (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20211231101829-6736 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20211231101829-6736 --output=json --layout=cluster: exit status 2 (563.191763ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20211231101829-6736","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20211231101829-6736","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.56s)
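
The stdout above shows the cluster-layout status schema: HTTP-like status codes (200 OK, 405 Stopped, 418 Paused) at the cluster, component, and node level. A sketch of decoding it in Go, with struct fields taken from the JSON shown above rather than from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Component mirrors one entry such as "apiserver", "kubelet", or "kubeconfig".
type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

// Node carries per-node component states.
type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

// ClusterStatus mirrors the top-level object printed above.
type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	Step          string               `json:"Step"`
	StepDetail    string               `json:"StepDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	// status exits non-zero (2 above) while the cluster is paused, so the
	// error is ignored and whatever was printed is parsed.
	out, _ := exec.Command("minikube", "status", "-p", "pause-20211231101829-6736",
		"--output=json", "--layout=cluster").Output()
	var st ClusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}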

                                                
                                    
TestPause/serial/Unpause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:123: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20211231101829-6736 --alsologtostderr -v=5
pause_test.go:123: (dbg) Done: out/minikube-linux-amd64 unpause -p pause-20211231101829-6736 --alsologtostderr -v=5: (1.032297813s)
--- PASS: TestPause/serial/Unpause (1.03s)

                                                
                                    
TestPause/serial/PauseAgain (5.6s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:112: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20211231101829-6736 --alsologtostderr -v=5
pause_test.go:112: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20211231101829-6736 --alsologtostderr -v=5: (5.602517016s)
--- PASS: TestPause/serial/PauseAgain (5.60s)

                                                
                                    
TestPause/serial/DeletePaused (4.22s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:134: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20211231101829-6736 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20211231101829-6736 --alsologtostderr -v=5: (4.22229938s)
--- PASS: TestPause/serial/DeletePaused (4.22s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.97s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:144: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:170: (dbg) Run:  docker ps -a
pause_test.go:175: (dbg) Run:  docker volume inspect pause-20211231101829-6736
pause_test.go:175: (dbg) Non-zero exit: docker volume inspect pause-20211231101829-6736: exit status 1 (36.086502ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20211231101829-6736

                                                
                                                
** /stderr **
pause_test.go:180: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.97s)
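
The deletion check above relies on docker volume inspect failing once the profile's volume is gone: exit status 1 with "Error: No such volume: ..." on stderr is the pass condition, not a failure. A small Go sketch of that negative assertion (illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted reports whether a Docker volume no longer exists by treating
// a failed `docker volume inspect` with "No such volume" as the expected case.
func volumeDeleted(name string) (bool, error) {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err == nil {
		return false, nil // inspect succeeded: the volume still exists
	}
	if strings.Contains(string(out), "No such volume") {
		return true, nil // expected failure: the volume is gone
	}
	return false, fmt.Errorf("unexpected inspect failure: %v: %s", err, out)
}

func main() {
	gone, err := volumeDeleted("pause-20211231101829-6736")
	fmt.Println(gone, err)
}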

                                                
                                    
TestNetworkPlugins/group/cilium/Start (111.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E1231 10:20:10.310228    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/functional-20211231094749-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20211231101408-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m51.240652714s)
--- PASS: TestNetworkPlugins/group/cilium/Start (111.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20211231101406-6736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.53s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20211231101406-6736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-7777g" [74646521-6777-4769-a221-06570f4106ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-7777g" [74646521-6777-4769-a221-06570f4106ff] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009024051s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.50s)
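
All of the NetCatPod steps in this report follow the same shape: replace the deployment from testdata/netcat-deployment.yaml, then wait for pods labeled app=netcat to become Ready within the 15m budget. The test helper polls the Kubernetes API directly; as a rough stand-in, the same wait can be expressed through kubectl (context name copied from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `kubectl wait` blocks until the readiness condition holds on every
	// matching pod, or until the timeout fires.
	out, err := exec.Command("kubectl", "--context", "auto-20211231101406-6736",
		"wait", "--for=condition=ready", "pod", "-l", "app=netcat",
		"--timeout=15m").CombinedOutput()
	fmt.Println(string(out), err)
}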

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20211231101406-6736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20211231101406-6736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20211231101406-6736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20211231101408-6736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.76s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (10.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20211231101408-6736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-8g59j" [16d1c0ab-fff2-42cd-9c0e-076af73db883] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:343: "netcat-668db85669-8g59j" [16d1c0ab-fff2-42cd-9c0e-076af73db883] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.016656173s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.58s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (331.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (5m31.395010511s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (331.40s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-zjg9q" [74e33ab5-6d28-4175-994c-6f95bce841f0] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.01473277s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20211231101408-6736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (10.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20211231101408-6736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context cilium-20211231101408-6736 replace --force -f testdata/netcat-deployment.yaml: (1.091672762s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-ppj8r" [1d08e99c-0e0c-418a-870a-616e63fdc8bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-ppj8r" [1d08e99c-0e0c-418a-870a-616e63fdc8bc] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.014156411s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20211231101408-6736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20211231101408-6736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20211231101408-6736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (69.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E1231 10:22:39.269579    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 10:22:44.606679    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20211231101406-6736 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.848614187s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.85s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20211231101406-6736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20211231101406-6736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-2mxrx" [67f0fdf8-7aa1-47a7-979a-a2b0ffa71259] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-2mxrx" [67f0fdf8-7aa1-47a7-979a-a2b0ffa71259] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006458387s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20211231101406-6736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.56s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20211231101406-6736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20211231101406-6736 replace --force -f testdata/netcat-deployment.yaml: (4.062616683s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-gwg66" [0cebb362-a6c0-419f-a69d-a366b4452f92] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-gwg66" [0cebb362-a6c0-419f-a69d-a366b4452f92] Running
E1231 10:26:17.718834    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/custom-weave-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008408821s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (75.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211231102928-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211231102928-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0: (1m15.168121326s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211231102928-6736 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [942e4e1d-82d3-41a6-a154-349393cda983] Pending
helpers_test.go:343: "busybox" [942e4e1d-82d3-41a6-a154-349393cda983] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [942e4e1d-82d3-41a6-a154-349393cda983] Running
E1231 10:30:49.789526    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/auto-20211231101406-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.017064354s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20211231102928-6736 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20211231102928-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20211231102928-6736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (20.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20211231102928-6736 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20211231102928-6736 --alsologtostderr -v=3: (20.525415854s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736: exit status 7 (106.703815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20211231102928-6736 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
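
The "(may be ok)" note above reflects how minikube status reports a stopped profile: as I read minikube's documented behavior, the exit status encodes host/cluster/Kubernetes state in its bits, so 7 (all three stopped) is the expected result here rather than an error, and the test then enables the dashboard addon against the stopped profile. A hedged sketch of tolerating that exit code (the bit-encoding reading is my assumption, not stated in this log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Query only the host field; a stopped profile still prints "Stopped"
	// on stdout while exiting with a non-zero status.
	cmd := exec.Command("minikube", "status", "--format={{.Host}}",
		"-p", "no-preload-20211231102928-6736")
	out, err := cmd.Output()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		// Exit 7 is the fully-stopped case the test tolerates before
		// proceeding to enable addons on the stopped profile.
		fmt.Printf("host %q stopped (exit 7, may be ok)\n", string(out))
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host %q running\n", string(out))
}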

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20211231102928-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0
E1231 10:31:59.313064    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20211231102928-6736 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0: (57.813151606s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-w88f9" [be6b3ff1-a370-4339-aacb-607043119beb] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012944375s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-w88f9" [be6b3ff1-a370-4339-aacb-607043119beb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007265991s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20211231102928-6736 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20211231102928-6736 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)
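
The image audit above lists tags from sudo crictl images -o json (run inside the node via minikube ssh) and flags anything outside the expected Kubernetes image set, here kindest/kindnetd and the busybox test image. A sketch of the same enumeration, assuming direct access to the node; the JSON field names ("images", "repoTags") are assumed from CRI's ListImages response rather than taken from the test:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages models just the fields this sketch needs from crictl's output.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	// Print every tag; a caller would diff this against the expected set.
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}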

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20211231102928-6736 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736: exit status 2 (482.583828ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736: exit status 2 (476.201668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20211231102928-6736 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20211231102928-6736 -n no-preload-20211231102928-6736
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (66.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0
E1231 10:32:39.269640    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/ingress-addon-legacy-20211231095054-6736/client.crt: no such file or directory
E1231 10:33:29.444564    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.449973    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.460406    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.480898    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.521233    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.602013    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:29.762472    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:30.082896    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:30.723098    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:32.004057    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:34.564318    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0: (1m6.654805404s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20211231103230-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (20.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20211231103230-6736 --alsologtostderr -v=3
E1231 10:33:39.685131    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:33:49.925646    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20211231103230-6736 --alsologtostderr -v=3: (20.350154227s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736: exit status 7 (102.931645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20211231103230-6736 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (53.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0
E1231 10:34:10.405853    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
E1231 10:34:41.556919    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
E1231 10:34:51.366636    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/bridge-20211231101406-6736/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20211231103230-6736 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.2-rc.0: (52.866992865s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20211231103230-6736 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20211231103230-6736 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736: exit status 2 (512.520539ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736: exit status 2 (508.618626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20211231103230-6736 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20211231103230-6736 -n newest-cni-20211231103230-6736
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20211231102602-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20211231102602-6736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (20.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20211231102602-6736 --alsologtostderr -v=3
E1231 10:39:24.607406    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/addons-20211231094216-6736/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20211231102602-6736 --alsologtostderr -v=3: (20.397134639s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20211231102602-6736 -n old-k8s-version-20211231102602-6736: exit status 7 (107.105489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20211231102602-6736 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20211231102953-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20211231102953-6736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (20.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20211231102953-6736 --alsologtostderr -v=3
E1231 10:43:22.357622    6736 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-12739-3388-7c21f9163ae8b175cef980961032eb5d83504bec/.minikube/profiles/cilium-20211231101408-6736/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20211231102953-6736 --alsologtostderr -v=3: (20.290551817s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20211231102953-6736 -n embed-certs-20211231102953-6736: exit status 7 (104.986783ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20211231102953-6736 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20211231103230-6736 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20211231103230-6736 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (20.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20211231103230-6736 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20211231103230-6736 --alsologtostderr -v=3: (20.367124428s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.37s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20211231103230-6736 -n default-k8s-different-port-20211231103230-6736: exit status 7 (109.245323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20211231103230-6736 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    

Test skip (25/266)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.1/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.1/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.1/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.2-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.2-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.2-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.2-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.2-rc.0/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:36: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
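Note: most of the skips in this group are runtime guards of the same shape as the TestDockerFlags skip above. A minimal sketch of the pattern follows — the package and helper names are hypothetical (the real checks live in files such as docker_test.go and functional_test.go); it shows only the generic skip-unless-runtime-matches idiom.

package mytests // hypothetical package name

import "testing"

// skipUnlessRuntime skips the calling test when the container runtime
// under test does not match the runtime the test supports — the same
// pattern the docker-only tests in this report apply while containerd
// is the runtime being exercised.
func skipUnlessRuntime(t *testing.T, got, want string) {
	t.Helper()
	if got != want {
		t.Skipf("skipping: only runs with %s container runtime, currently testing %s", want, got)
	}
}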

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:187: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:446: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:536: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:89: Skipping the test as the containerd container runtime requires CNI
helpers_test.go:176: Cleaning up "kubenet-20211231101406-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20211231101406-6736
--- SKIP: TestNetworkPlugins/group/kubenet (0.48s)

                                                
                                    
TestNetworkPlugins/group/flannel (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20211231101406-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20211231101406-6736
--- SKIP: TestNetworkPlugins/group/flannel (0.50s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20211231103229-6736" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20211231103229-6736
--- SKIP: TestStartStop/group/disable-driver-mounts (0.50s)

                                                
                                    