=== RUN TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run: out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.16.0
=== CONT TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (8m19.462655006s)
-- stdout --
* [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=13812
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on user configuration
- More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
* Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2200MB) ...
* docker "old-k8s-version-20220325015306-262786" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=2200MB) ...
* Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
- kubelet.cni-conf-dir=/etc/cni/net.mk
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
I0325 01:53:06.744250 431164 out.go:297] Setting OutFile to fd 1 ...
I0325 01:53:06.744362 431164 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0325 01:53:06.744372 431164 out.go:310] Setting ErrFile to fd 2...
I0325 01:53:06.744376 431164 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0325 01:53:06.744486 431164 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
I0325 01:53:06.744811 431164 out.go:304] Setting JSON to false
I0325 01:53:06.746140 431164 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16259,"bootTime":1648156928,"procs":594,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0325 01:53:06.746212 431164 start.go:125] virtualization: kvm guest
I0325 01:53:06.886302 431164 out.go:176] * [old-k8s-version-20220325015306-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
I0325 01:53:06.886522 431164 notify.go:193] Checking for updates...
I0325 01:53:07.082946 431164 out.go:176] - MINIKUBE_LOCATION=13812
I0325 01:53:07.097889 431164 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0325 01:53:07.100205 431164 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:53:07.101930 431164 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
I0325 01:53:07.103536 431164 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0325 01:53:07.104276 431164 config.go:176] Loaded profile config "auto-20220325014919-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:53:07.104454 431164 config.go:176] Loaded profile config "kubernetes-upgrade-20220325015003-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.4-rc.0
I0325 01:53:07.104578 431164 config.go:176] Loaded profile config "running-upgrade-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0325 01:53:07.104639 431164 driver.go:346] Setting default libvirt URI to qemu:///system
I0325 01:53:07.153836 431164 docker.go:136] docker version: linux-20.10.14
I0325 01:53:07.153956 431164 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0325 01:53:07.263132 431164 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:69 SystemTime:2022-03-25 01:53:07.188979319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0325 01:53:07.263257 431164 docker.go:253] overlay module found
I0325 01:53:07.267007 431164 out.go:176] * Using the docker driver based on user configuration
I0325 01:53:07.267047 431164 start.go:284] selected driver: docker
I0325 01:53:07.267053 431164 start.go:801] validating driver "docker" against <nil>
I0325 01:53:07.267074 431164 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
W0325 01:53:07.267123 431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0325 01:53:07.267144 431164 out.go:241] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0325 01:53:07.268782 431164 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0325 01:53:07.269411 431164 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0325 01:53:07.379145 431164 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:69 SystemTime:2022-03-25 01:53:07.305618135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0325 01:53:07.379310 431164 start_flags.go:290] no existing cluster config was found, will generate one from the flags
I0325 01:53:07.379511 431164 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0325 01:53:07.379535 431164 cni.go:93] Creating CNI manager for ""
I0325 01:53:07.379542 431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0325 01:53:07.379550 431164 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0325 01:53:07.379559 431164 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0325 01:53:07.379565 431164 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
I0325 01:53:07.379578 431164 start_flags.go:304] config:
{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0325 01:53:07.403164 431164 out.go:176] * Starting control plane node old-k8s-version-20220325015306-262786 in cluster old-k8s-version-20220325015306-262786
I0325 01:53:07.403218 431164 cache.go:120] Beginning downloading kic base image for docker with containerd
I0325 01:53:07.405626 431164 out.go:176] * Pulling base image ...
I0325 01:53:07.405667 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:53:07.405710 431164 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4
I0325 01:53:07.405726 431164 cache.go:57] Caching tarball of preloaded images
I0325 01:53:07.405760 431164 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0325 01:53:07.405992 431164 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0325 01:53:07.406016 431164 cache.go:60] Finished verifying existence of preloaded tar for v1.16.0 on containerd
I0325 01:53:07.406151 431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
I0325 01:53:07.406184 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json: {Name:mk5e2f006e0e19c174c7a53c7f043140e531ad14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:53:07.454855 431164 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
I0325 01:53:07.454890 431164 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
I0325 01:53:07.454919 431164 cache.go:208] Successfully downloaded all kic artifacts
I0325 01:53:07.454984 431164 start.go:348] acquiring machines lock for old-k8s-version-20220325015306-262786: {Name:mk6f712225030023aec99b26d6c356d6d62f23e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0325 01:53:07.455134 431164 start.go:352] acquired machines lock for "old-k8s-version-20220325015306-262786" in 113.509µs
I0325 01:53:07.455167 431164 start.go:90] Provisioning new machine with config: &{Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:53:07.455280 431164 start.go:127] createHost starting for "" (driver="docker")
I0325 01:53:07.457995 431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0325 01:53:07.458326 431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:53:07.458370 431164 client.go:168] LocalClient.Create starting
I0325 01:53:07.458463 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
I0325 01:53:07.458523 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:53:07.458550 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:53:07.458632 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
I0325 01:53:07.458659 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:53:07.458681 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:53:07.459176 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:53:07.499630 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:53:07.499703 431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
I0325 01:53:07.499732 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
W0325 01:53:07.540491 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:07.540532 431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: old-k8s-version-20220325015306-262786
I0325 01:53:07.540563 431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: old-k8s-version-20220325015306-262786
** /stderr **
I0325 01:53:07.540653 431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:53:07.596601 431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-23ae52b3b8f2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b7:bb:c1:32}}
I0325 01:53:07.597575 431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-16647239848e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:47:fb:23:78}}
I0325 01:53:07.598613 431164 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8] misses:0}
I0325 01:53:07.598656 431164 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0325 01:53:07.598673 431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0325 01:53:07.598722 431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
I0325 01:53:07.736169 431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.67.0/24 created
I0325 01:53:07.736216 431164 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220325015306-262786" container
I0325 01:53:07.736267 431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0325 01:53:07.777633 431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
I0325 01:53:07.813476 431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
I0325 01:53:07.813560 431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
I0325 01:53:09.710810 431164 cli_runner.go:186] Completed: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (1.897172429s)
I0325 01:53:09.710849 431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
I0325 01:53:09.710897 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:53:09.710924 431164 kic.go:179] Starting extracting preloaded images to volume ...
I0325 01:53:09.711017 431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
I0325 01:53:18.109802 431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.398715615s)
I0325 01:53:18.109852 431164 kic.go:188] duration metric: took 8.398924 seconds to extract preloaded images to volume
W0325 01:53:18.109888 431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0325 01:53:18.109898 431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0325 01:53:18.109956 431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0325 01:53:18.228621 431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
W0325 01:53:18.309373 431164 cli_runner.go:180] docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 returned with exit code 125
I0325 01:53:18.309445 431164 client.go:171] LocalClient.Create took 10.851065706s
I0325 01:53:20.309747 431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:53:20.309841 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:20.349287 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:20.349415 431164 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:20.625849 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:20.665234 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:20.665363 431164 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:21.206079 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:21.246889 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:21.247027 431164 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:21.902781 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:21.939759 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
W0325 01:53:21.939865 431164 start.go:277] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0325 01:53:21.939880 431164 start.go:244] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:21.939912 431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:53:21.939940 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:21.972264 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:21.972406 431164 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:22.203802 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:22.236559 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:22.236676 431164 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:22.681982 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:22.711090 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:22.711189 431164 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:23.029697 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:23.061316 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:53:23.061407 431164 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:23.616238 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
W0325 01:53:23.646212 431164 cli_runner.go:180] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786 returned with exit code 1
W0325 01:53:23.646343 431164 start.go:292] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
W0325 01:53:23.646363 431164 start.go:249] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
I0325 01:53:23.646392 431164 start.go:130] duration metric: createHost completed in 16.191098684s
I0325 01:53:23.646401 431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 16.191256374s
W0325 01:53:23.646435 431164 start.go:570] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
stdout:
70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
stderr:
docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
I0325 01:53:23.646876 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
W0325 01:53:23.674964 431164 start.go:575] delete host: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0325 01:53:23.675199 431164 out.go:241] ! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
stdout:
70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
stderr:
docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.67.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
stdout:
70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218
stderr:
docker: Error response from daemon: network old-k8s-version-20220325015306-262786 not found.
I0325 01:53:23.675224 431164 start.go:585] Will try again in 5 seconds ...
I0325 01:53:28.676153 431164 start.go:348] acquiring machines lock for old-k8s-version-20220325015306-262786: {Name:mk6f712225030023aec99b26d6c356d6d62f23e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0325 01:53:28.676313 431164 start.go:352] acquired machines lock for "old-k8s-version-20220325015306-262786" in 115.05µs
I0325 01:53:28.676353 431164 start.go:94] Skipping create...Using existing machine configuration
I0325 01:53:28.676363 431164 fix.go:55] fixHost starting:
I0325 01:53:28.676888 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:28.723054 431164 fix.go:108] recreateIfNeeded on old-k8s-version-20220325015306-262786: state= err=<nil>
I0325 01:53:28.723096 431164 fix.go:113] machineExists: false. err=machine does not exist
I0325 01:53:28.724752 431164 out.go:176] * docker "old-k8s-version-20220325015306-262786" container is missing, will recreate.
I0325 01:53:28.724781 431164 delete.go:124] DEMOLISHING old-k8s-version-20220325015306-262786 ...
I0325 01:53:28.724842 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:28.756595 431164 stop.go:79] host is in state
I0325 01:53:28.756631 431164 main.go:130] libmachine: Stopping "old-k8s-version-20220325015306-262786"...
I0325 01:53:28.756698 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:28.789590 431164 kic_runner.go:93] Run: systemctl --version
I0325 01:53:28.789616 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 systemctl --version]
I0325 01:53:28.830431 431164 kic_runner.go:93] Run: sudo service kubelet stop
I0325 01:53:28.830456 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo service kubelet stop]
I0325 01:53:28.875260 431164 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
** /stderr **
W0325 01:53:28.875281 431164 kic.go:443] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
I0325 01:53:28.875341 431164 kic_runner.go:93] Run: sudo service kubelet stop
I0325 01:53:28.875353 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo service kubelet stop]
I0325 01:53:28.939087 431164 openrc.go:165] stop output:
** stderr **
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
** /stderr **
W0325 01:53:28.939115 431164 kic.go:445] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
I0325 01:53:28.939136 431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
I0325 01:53:28.939214 431164 kic_runner.go:93] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
I0325 01:53:28.939238 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
I0325 01:53:28.981135 431164 kic.go:456] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 1
stdout:
stderr:
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
I0325 01:53:28.981166 431164 kic.go:466] successfully stopped kubernetes!
I0325 01:53:28.981217 431164 kic_runner.go:93] Run: pgrep kube-apiserver
I0325 01:53:28.981227 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 pgrep kube-apiserver]
I0325 01:53:29.085088 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:32.136545 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:35.171131 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:38.206727 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:41.242333 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:44.276035 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:47.317900 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:50.363044 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:53.398845 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:56.467091 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:53:59.511115 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:02.556552 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:05.591089 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:08.645602 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:11.683108 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:14.736042 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:17.769080 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:20.804717 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:23.851088 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:26.885627 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:29.920168 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:32.955019 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:35.989701 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:39.039107 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:42.070890 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:45.104461 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:48.139081 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:51.171105 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:54.203995 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:57.236361 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:00.275444 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:03.334100 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:06.368209 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:09.407114 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:12.439881 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:15.478625 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:18.513302 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:21.545857 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:24.580164 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:27.612926 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:30.649664 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:33.683100 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:36.715325 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:39.751091 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:42.785301 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:45.821589 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:48.854113 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:51.885844 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:54.919097 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:57.951168 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:00.986964 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:04.022375 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:07.054544 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:10.088923 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:13.121694 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:16.158628 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:19.193066 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:22.229496 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:25.263135 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:28.299080 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:31.335036 431164 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0325 01:56:31.335082 431164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0325 01:56:31.335570 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
W0325 01:56:31.369049 431164 delete.go:135] deletehost failed: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0325 01:56:31.369136 431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
I0325 01:56:31.404692 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:31.436643 431164 cli_runner.go:133] Run: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0"
W0325 01:56:31.469236 431164 cli_runner.go:180] docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0" returned with exit code 1
I0325 01:56:31.469271 431164 oci.go:659] error shutdown old-k8s-version-20220325015306-262786: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
I0325 01:56:32.470272 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:32.503561 431164 oci.go:673] temporary error: container old-k8s-version-20220325015306-262786 status is but expect it to be exited
I0325 01:56:32.503590 431164 oci.go:679] Successfully shutdown container old-k8s-version-20220325015306-262786
I0325 01:56:32.503641 431164 cli_runner.go:133] Run: docker rm -f -v old-k8s-version-20220325015306-262786
I0325 01:56:32.540810 431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
W0325 01:56:32.570903 431164 cli_runner.go:180] docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:32.571005 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:56:32.601633 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:56:32.601695 431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
I0325 01:56:32.601719 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
W0325 01:56:32.632633 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:32.632663 431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: old-k8s-version-20220325015306-262786
I0325 01:56:32.632678 431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: old-k8s-version-20220325015306-262786
** /stderr **
W0325 01:56:32.632818 431164 delete.go:139] delete failed (probably ok) <nil>
I0325 01:56:32.632831 431164 fix.go:120] Sleeping 1 second for extra luck!
I0325 01:56:33.633777 431164 start.go:127] createHost starting for "" (driver="docker")
I0325 01:56:33.636953 431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0325 01:56:33.637111 431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:33.637158 431164 client.go:168] LocalClient.Create starting
I0325 01:56:33.637270 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
I0325 01:56:33.637315 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:56:33.637341 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:56:33.637420 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
I0325 01:56:33.637448 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:56:33.637471 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:56:33.637805 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:56:33.670584 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:56:33.670681 431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
I0325 01:56:33.670699 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
W0325 01:56:33.700818 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:33.700851 431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: old-k8s-version-20220325015306-262786
I0325 01:56:33.700871 431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: old-k8s-version-20220325015306-262786
** /stderr **
I0325 01:56:33.700917 431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:56:33.731365 431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
I0325 01:56:33.732243 431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-a040cc4bab62 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d0:f2:08:b6}}
I0325 01:56:33.733015 431164 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-12bda0d2312e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:32:64:a8}}
I0325 01:56:33.733812 431164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8 192.168.76.0:0xc000702388] misses:0}
I0325 01:56:33.733853 431164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0325 01:56:33.733877 431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0325 01:56:33.733929 431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
I0325 01:56:33.801121 431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 created
I0325 01:56:33.801156 431164 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220325015306-262786" container
I0325 01:56:33.801207 431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0325 01:56:33.833969 431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
I0325 01:56:33.863735 431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
I0325 01:56:33.863800 431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
I0325 01:56:34.361286 431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
I0325 01:56:34.361350 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:56:34.361371 431164 kic.go:179] Starting extracting preloaded images to volume ...
I0325 01:56:34.361435 431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
I0325 01:56:43.174328 431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.812845537s)
I0325 01:56:43.174371 431164 kic.go:188] duration metric: took 8.812995 seconds to extract preloaded images to volume
W0325 01:56:43.174413 431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0325 01:56:43.174420 431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0325 01:56:43.174472 431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0325 01:56:43.265519 431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.76.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
I0325 01:56:43.664728 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Running}}
I0325 01:56:43.700561 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:43.732786 431164 cli_runner.go:133] Run: docker exec old-k8s-version-20220325015306-262786 stat /var/lib/dpkg/alternatives/iptables
I0325 01:56:43.800760 431164 oci.go:281] the created container "old-k8s-version-20220325015306-262786" has a running status.
I0325 01:56:43.800796 431164 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa...
I0325 01:56:43.897798 431164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0325 01:56:44.005992 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:44.040565 431164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0325 01:56:44.040590 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
I0325 01:56:44.141276 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:44.181329 431164 machine.go:88] provisioning docker machine ...
I0325 01:56:44.181386 431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
I0325 01:56:44.181456 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.218999 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:44.219333 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:44.219364 431164 main.go:130] libmachine: About to run SSH command:
sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
I0325 01:56:44.346895 431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
I0325 01:56:44.347002 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.378982 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:44.379158 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:44.379177 431164 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts;
fi
fi
I0325 01:56:44.499114 431164 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0325 01:56:44.499153 431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0325 01:56:44.499174 431164 ubuntu.go:177] setting up certificates
I0325 01:56:44.499184 431164 provision.go:83] configureAuth start
I0325 01:56:44.499239 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:44.532553 431164 provision.go:138] copyHostCerts
I0325 01:56:44.532637 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0325 01:56:44.532651 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0325 01:56:44.532750 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0325 01:56:44.532836 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0325 01:56:44.532855 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0325 01:56:44.532882 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0325 01:56:44.532930 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0325 01:56:44.532938 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0325 01:56:44.532957 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0325 01:56:44.532998 431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
I0325 01:56:44.716034 431164 provision.go:172] copyRemoteCerts
I0325 01:56:44.716095 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0325 01:56:44.716131 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.750262 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:44.842652 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0325 01:56:44.860534 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0325 01:56:44.877456 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0325 01:56:44.894710 431164 provision.go:86] duration metric: configureAuth took 395.50834ms
I0325 01:56:44.894744 431164 ubuntu.go:193] setting minikube options for container-runtime
I0325 01:56:44.894925 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:56:44.894941 431164 machine.go:91] provisioned docker machine in 713.577559ms
I0325 01:56:44.894947 431164 client.go:171] LocalClient.Create took 11.257778857s
I0325 01:56:44.894990 431164 start.go:169] duration metric: libmachine.API.Create for "old-k8s-version-20220325015306-262786" took 11.257879213s
I0325 01:56:44.895011 431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:44.895022 431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0325 01:56:44.895080 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0325 01:56:44.895130 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.927429 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:45.014679 431164 ssh_runner.go:195] Run: cat /etc/os-release
I0325 01:56:45.017487 431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0325 01:56:45.017516 431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0325 01:56:45.017525 431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0325 01:56:45.017530 431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0325 01:56:45.017538 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0325 01:56:45.017604 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0325 01:56:45.017669 431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
I0325 01:56:45.017744 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0325 01:56:45.024070 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:45.041483 431164 start.go:305] post-start completed in 146.454729ms
I0325 01:56:45.041798 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:45.076182 431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
I0325 01:56:45.076420 431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:56:45.076458 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.108209 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:45.195204 431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:56:45.198866 431164 start.go:130] duration metric: createHost completed in 11.565060546s
I0325 01:56:45.198964 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
W0325 01:56:45.231974 431164 fix.go:134] unexpected machine state, will restart: <nil>
I0325 01:56:45.232009 431164 machine.go:88] provisioning docker machine ...
I0325 01:56:45.232033 431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
I0325 01:56:45.232086 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.262455 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:45.262621 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:45.262636 431164 main.go:130] libmachine: About to run SSH command:
sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
I0325 01:56:45.386554 431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
I0325 01:56:45.386637 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.419901 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:45.420066 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:45.420098 431164 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts;
fi
fi
I0325 01:56:45.542421 431164 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0325 01:56:45.542450 431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0325 01:56:45.542464 431164 ubuntu.go:177] setting up certificates
I0325 01:56:45.542474 431164 provision.go:83] configureAuth start
I0325 01:56:45.542517 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:45.575074 431164 provision.go:138] copyHostCerts
I0325 01:56:45.575139 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0325 01:56:45.575151 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0325 01:56:45.575209 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0325 01:56:45.575301 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0325 01:56:45.575311 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0325 01:56:45.575333 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0325 01:56:45.575380 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0325 01:56:45.575388 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0325 01:56:45.575407 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0325 01:56:45.575453 431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
I0325 01:56:45.699927 431164 provision.go:172] copyRemoteCerts
I0325 01:56:45.699978 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0325 01:56:45.700008 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.732608 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.059471 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0325 01:56:46.077602 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0325 01:56:46.094741 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0325 01:56:46.111752 431164 provision.go:86] duration metric: configureAuth took 569.266891ms
I0325 01:56:46.111780 431164 ubuntu.go:193] setting minikube options for container-runtime
I0325 01:56:46.111953 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:56:46.111967 431164 machine.go:91] provisioned docker machine in 879.950952ms
I0325 01:56:46.111977 431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:46.111985 431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0325 01:56:46.112037 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0325 01:56:46.112083 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.146009 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.238610 431164 ssh_runner.go:195] Run: cat /etc/os-release
I0325 01:56:46.241357 431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0325 01:56:46.241383 431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0325 01:56:46.241391 431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0325 01:56:46.241399 431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0325 01:56:46.241413 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0325 01:56:46.241465 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0325 01:56:46.241560 431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
I0325 01:56:46.241650 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0325 01:56:46.248459 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:46.265464 431164 start.go:305] post-start completed in 153.469791ms
I0325 01:56:46.265532 431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:56:46.265573 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.297032 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.382984 431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:56:46.387252 431164 fix.go:57] fixHost completed within 3m17.71088257s
I0325 01:56:46.387290 431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 3m17.710952005s
I0325 01:56:46.387387 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:46.430623 431164 ssh_runner.go:195] Run: sudo service crio stop
I0325 01:56:46.430668 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.430668 431164 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0325 01:56:46.430720 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.467539 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.469867 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.901923 431164 openrc.go:165] stop output:
I0325 01:56:46.901990 431164 ssh_runner.go:195] Run: sudo service crio status
I0325 01:56:46.918929 431164 docker.go:183] disabling docker service ...
I0325 01:56:46.918994 431164 ssh_runner.go:195] Run: sudo service docker.socket stop
I0325 01:56:47.285757 431164 openrc.go:165] stop output:
** stderr **
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
** /stderr **
E0325 01:56:47.285792 431164 docker.go:186] "Failed to stop" err=<
sudo service docker.socket stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
> service="docker.socket"
I0325 01:56:47.285838 431164 ssh_runner.go:195] Run: sudo service docker.service stop
I0325 01:56:47.649755 431164 openrc.go:165] stop output:
** stderr **
Failed to stop docker.service.service: Unit docker.service.service not loaded.
** /stderr **
E0325 01:56:47.649784 431164 docker.go:189] "Failed to stop" err=<
sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
> service="docker.service"
W0325 01:56:47.649796 431164 cruntime.go:283] disable failed: sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
I0325 01:56:47.649838 431164 ssh_runner.go:195] Run: sudo service docker status
W0325 01:56:47.664778 431164 containerd.go:244] disableOthers: Docker is still active
I0325 01:56:47.664901 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0325 01:56:47.676728 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1
fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10
KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9
kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
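The command above ships the generated containerd config as an inline base64 payload, decoding it on the remote side with `base64 -d | sudo tee /etc/containerd/config.toml`. A minimal local sketch of the same pattern, using a short sample payload rather than the full blob from the log (the real payload does begin with the same `version = 2` header):

```shell
# Write a config file from an inline base64 payload, as minikube does
# over ssh_runner. Demo payload and local path only, no sudo needed.
payload='dmVyc2lvbiA9IDIK'        # base64 for: version = 2\n
mkdir -p ./etc-containerd
printf %s "$payload" | base64 -d | tee ./etc-containerd/config.toml
# → version = 2
```

Encoding the file sidesteps quoting problems: the TOML can contain quotes, newlines, and `$` without the remote shell interpreting any of it.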
I0325 01:56:47.689398 431164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0325 01:56:47.695491 431164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0325 01:56:47.701670 431164 ssh_runner.go:195] Run: sudo service containerd restart
I0325 01:56:47.775876 431164 openrc.go:152] restart output:
I0325 01:56:47.775911 431164 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0325 01:56:47.775957 431164 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0325 01:56:47.780036 431164 start.go:462] Will wait 60s for crictl version
I0325 01:56:47.780095 431164 ssh_runner.go:195] Run: sudo crictl version
I0325 01:56:47.808499 431164 retry.go:31] will retry after 8.009118606s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-03-25T01:56:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0325 01:56:55.819167 431164 ssh_runner.go:195] Run: sudo crictl version
I0325 01:56:55.842809 431164 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.5.10
RuntimeApiVersion: v1alpha2
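The `retry.go:31` line above shows minikube polling `crictl version` until containerd's CRI server finishes initializing after the restart. A hedged shell sketch of the same poll-until-ready loop; `probe` is a stand-in for `sudo crictl version` that fails twice before succeeding:

```shell
# Poll a command until it succeeds, mirroring minikube's retry around
# "server is not initialized yet". 'probe' is a fake that needs 3 calls.
tries=0
probe() { tries=$((tries+1)); [ "$tries" -ge 3 ]; }

attempt=0
until probe; do
  attempt=$((attempt+1))
  [ "$attempt" -ge 10 ] && { echo "gave up"; exit 1; }
  sleep 0.1    # the real retry waited ~8s between attempts
done
echo "ready after $tries tries"
# → ready after 3 tries
```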
I0325 01:56:55.842867 431164 ssh_runner.go:195] Run: containerd --version
I0325 01:56:55.862493 431164 ssh_runner.go:195] Run: containerd --version
I0325 01:56:55.885291 431164 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
I0325 01:56:55.885389 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:56:55.918381 431164 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0325 01:56:55.921728 431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
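The one-liner above refreshes `/etc/hosts` idempotently: `grep -v` drops any existing line ending in a tab plus the hostname, the new mapping is appended, and the temp file is copied back over the original. The same trick against a throwaway demo file (not the real `/etc/hosts`, and without the `sudo cp`):

```shell
# Idempotent hosts-entry refresh, as in the log's grep/echo/cp pipeline.
hosts=./hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.76.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep host.minikube.internal "$hosts"
# → 192.168.76.1	host.minikube.internal
```

Anchoring the pattern on a literal tab (`$'\t…$'`) means only the exact hostname entry is replaced; names that merely contain the string as a suffix of a longer field survive.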
I0325 01:56:55.933134 431164 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0325 01:56:55.933231 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:56:55.933303 431164 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:56:55.955768 431164 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:56:55.955788 431164 containerd.go:526] Images already preloaded, skipping extraction
I0325 01:56:55.955828 431164 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:56:55.979329 431164 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:56:55.979348 431164 cache_images.go:84] Images are preloaded, skipping loading
I0325 01:56:55.979386 431164 ssh_runner.go:195] Run: sudo crictl info
I0325 01:56:56.002748 431164 cni.go:93] Creating CNI manager for ""
I0325 01:56:56.002768 431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0325 01:56:56.002779 431164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0325 01:56:56.002792 431164 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgro
upfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0325 01:56:56.002974 431164 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-20220325015306-262786"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: old-k8s-version-20220325015306-262786
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
kubernetesVersion: v1.16.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0325 01:56:56.003083 431164 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
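The kubelet drop-in above uses systemd's override idiom: an empty `ExecStart=` first clears the base unit's command list, then the second `ExecStart=` sets the real invocation (the same file is also mirrored to `/etc/init.d/kubelet` plus an OpenRC restart wrapper, per the scp lines below). Shape of the drop-in only, with the long flag list abbreviated — the full command is in the log above:

```
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (abbreviated)
[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --config=/var/lib/kubelet/config.yaml ...

[Install]
```

Without the empty `ExecStart=`, systemd would reject the drop-in for a non-oneshot service as defining two start commands.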
I0325 01:56:56.003141 431164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
I0325 01:56:56.009691 431164 binaries.go:44] Found k8s binaries, skipping transfer
I0325 01:56:56.009827 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0325 01:56:56.016464 431164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
I0325 01:56:56.028607 431164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0325 01:56:56.041034 431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
I0325 01:56:56.052949 431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0325 01:56:56.064655 431164 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
I0325 01:56:56.077971 431164 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0325 01:56:56.080686 431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0325 01:56:56.089291 431164 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
I0325 01:56:56.089415 431164 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
I0325 01:56:56.089479 431164 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
I0325 01:56:56.089550 431164 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
I0325 01:56:56.089574 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt with IP's: []
I0325 01:56:56.173943 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt ...
I0325 01:56:56.173977 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: {Name:mk49efef0712da8d212d4d9821e0f44d60c00474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.174212 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key ...
I0325 01:56:56.174231 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key: {Name:mk717fd0b3391f00b7d69817a759d1a2ba6569e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.174386 431164 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
I0325 01:56:56.174407 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0325 01:56:56.553488 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 ...
I0325 01:56:56.553520 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25: {Name:mk0db50f453f850e6693f5f3251d591297fe24c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.553723 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 ...
I0325 01:56:56.553738 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25: {Name:mk44b3f12e50b4c043237e17ee319a130c7e6799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.553849 431164 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt
I0325 01:56:56.553904 431164 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key
I0325 01:56:56.553946 431164 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
I0325 01:56:56.553962 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt with IP's: []
I0325 01:56:56.634118 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt ...
I0325 01:56:56.634144 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt: {Name:mk41a988659c1306ddd1bb6feb42c4fcbdf737c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.634328 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key ...
I0325 01:56:56.634387 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key: {Name:mk496346cb1866d19fd00f75f3dc225361dc4fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.634593 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
W0325 01:56:56.634634 431164 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
I0325 01:56:56.634643 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
I0325 01:56:56.634663 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
I0325 01:56:56.634688 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
I0325 01:56:56.634714 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
I0325 01:56:56.634755 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:56.635301 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0325 01:56:56.653204 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0325 01:56:56.669615 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0325 01:56:56.686091 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0325 01:56:56.702278 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0325 01:56:56.718732 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0325 01:56:56.734704 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0325 01:56:56.751950 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0325 01:56:56.768370 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
I0325 01:56:56.785599 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0325 01:56:56.802704 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
I0325 01:56:56.818636 431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0325 01:56:56.830434 431164 ssh_runner.go:195] Run: openssl version
I0325 01:56:56.834834 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0325 01:56:56.841688 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.844759 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.844799 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.849420 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0325 01:56:56.856216 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
I0325 01:56:56.863401 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
I0325 01:56:56.866302 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
I0325 01:56:56.866341 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
I0325 01:56:56.871090 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
I0325 01:56:56.878141 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
I0325 01:56:56.885043 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.887974 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.888019 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.892629 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
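The symlink names above (`b5213941.0`, `51391683.0`, `3ec20f2e.0`) come from OpenSSL's subject-hash lookup scheme: a CA in the trust directory must be reachable via a `<subject-hash>.0` link, which is exactly what the `openssl x509 -hash` / `ln -fs` pairs in the log compute and create. A self-contained sketch with a throwaway self-signed cert in a temp dir (demo paths, not the real `/etc/ssl/certs`):

```shell
# Recreate the <hash>.0 trust-dir convention with a disposable CA.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # e.g. 8 hex chars
ln -fs ca.pem "$dir/$h.0"                          # the link minikube makes
readlink "$dir/$h.0"
# → ca.pem
```

The `.0` suffix is a collision index; a second CA whose subject hashes identically would get `.1`.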
I0325 01:56:56.899573 431164 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0325 01:56:56.899669 431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0325 01:56:56.899700 431164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0325 01:56:56.924510 431164 cri.go:87] found id: ""
I0325 01:56:56.924564 431164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0325 01:56:56.967274 431164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0325 01:56:56.974042 431164 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0325 01:56:56.974100 431164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0325 01:56:56.980509 431164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0325 01:56:56.980549 431164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0325 01:56:57.342628 431164 out.go:203] - Generating certificates and keys ...
I0325 01:57:00.421358 431164 out.go:203] - Booting up control plane ...
I0325 01:57:10.462463 431164 out.go:203] - Configuring RBAC rules ...
I0325 01:57:10.884078 431164 cni.go:93] Creating CNI manager for ""
I0325 01:57:10.884101 431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0325 01:57:10.885886 431164 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0325 01:57:10.885957 431164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0325 01:57:10.889349 431164 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
I0325 01:57:10.889369 431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0325 01:57:10.902215 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0325 01:57:11.219931 431164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0325 01:57:11.220013 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:11.220072 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=old-k8s-version-20220325015306-262786 minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:11.227208 431164 ops.go:34] apiserver oom_adj: -16
I0325 01:57:11.318580 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:11.897565 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:12.397150 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:12.897044 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:13.397714 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:13.897135 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:14.396784 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:14.897509 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:15.397532 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:15.897241 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:16.397418 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:16.897298 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:17.397490 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:17.896851 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:18.396958 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:18.897528 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:19.397449 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:19.896818 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:20.396950 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:20.897730 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:21.397699 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:21.897770 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:22.397129 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:22.897777 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:23.396809 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:23.897374 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:24.396808 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:24.897374 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:25.397510 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:25.465074 431164 kubeadm.go:1020] duration metric: took 14.245126743s to wait for elevateKubeSystemPrivileges.
I0325 01:57:25.465105 431164 kubeadm.go:393] StartCluster complete in 28.565542464s
I0325 01:57:25.465127 431164 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:57:25.465222 431164 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:57:25.466826 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:57:25.982566 431164 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220325015306-262786" rescaled to 1
I0325 01:57:25.982642 431164 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:57:25.985735 431164 out.go:176] * Verifying Kubernetes components...
I0325 01:57:25.982729 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0325 01:57:25.985818 431164 ssh_runner.go:195] Run: sudo service kubelet status
I0325 01:57:25.982734 431164 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0325 01:57:25.982930 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:57:25.985917 431164 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220325015306-262786"
I0325 01:57:25.985938 431164 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220325015306-262786"
W0325 01:57:25.985944 431164 addons.go:165] addon storage-provisioner should already be in state true
I0325 01:57:25.985974 431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
I0325 01:57:25.987026 431164 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220325015306-262786"
I0325 01:57:25.987059 431164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220325015306-262786"
I0325 01:57:25.987464 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:25.987734 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:26.043330 431164 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0325 01:57:26.041809 431164 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220325015306-262786"
W0325 01:57:26.043448 431164 addons.go:165] addon default-storageclass should already be in state true
I0325 01:57:26.043461 431164 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:57:26.043473 431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0325 01:57:26.043499 431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
I0325 01:57:26.043528 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:57:26.043990 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:26.079480 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:57:26.080003 431164 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0325 01:57:26.080025 431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0325 01:57:26.080072 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:57:26.123901 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:57:26.130675 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0325 01:57:26.132207 431164 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
I0325 01:57:26.203910 431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:57:26.305985 431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0325 01:57:26.701311 431164 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
I0325 01:57:26.884863 431164 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0325 01:57:26.884915 431164 addons.go:417] enableAddons completed in 902.209882ms
I0325 01:57:28.137240 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:30.137382 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:32.137902 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:34.636994 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:36.637231 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:38.637618 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:41.138151 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:43.637420 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:46.137000 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:48.137252 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:50.137524 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:52.638010 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:55.137979 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:57.637645 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:00.137151 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:02.137531 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:04.137755 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:06.637823 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:09.137247 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:11.137649 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:13.138175 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:15.637967 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:18.137346 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:20.137621 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:22.138039 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:24.637505 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:26.637944 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:28.638663 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:31.137778 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:33.137957 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:35.637360 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:37.637456 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:40.137522 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:42.637830 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:44.638149 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:47.137013 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:49.137465 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:51.137831 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:53.138061 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:55.637301 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:57.637937 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:00.137993 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:02.138041 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:04.138262 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:06.637907 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:09.139879 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:11.637442 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:13.637538 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:15.639122 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:18.137261 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:20.137829 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:22.637466 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:24.637948 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:27.137486 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:29.137528 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:31.137566 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:33.138065 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:35.637535 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:37.637991 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:39.638114 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:42.137688 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:44.637241 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:46.637686 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:49.137625 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:51.638236 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:54.137670 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:56.138392 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:58.637751 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:00.638089 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:03.137541 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:05.637552 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:08.137145 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:10.137534 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:12.637732 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:15.138150 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:17.637995 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:20.137994 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:22.637195 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:24.638276 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:27.137477 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:29.138059 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:31.138114 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:33.637955 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:35.638305 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:38.137342 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:40.138018 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:42.638060 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:45.137181 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:47.137290 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:49.137908 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:51.638340 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:54.137713 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:56.637016 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:58.637267 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:00.637464 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:02.638041 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:05.137294 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:07.137350 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:09.137969 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:11.638005 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:14.137955 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:16.637434 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:18.637978 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:21.137203 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:23.137475 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:25.137628 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:26.139331 431164 node_ready.go:38] duration metric: took 4m0.007092133s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
I0325 02:01:26.141382 431164 out.go:176]
W0325 02:01:26.141510 431164 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0325 02:01:26.141527 431164 out.go:241] *
W0325 02:01:26.142250 431164 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0325 02:01:26.143976 431164 out.go:176]
** /stderr **
start_stop_delete_test.go:173: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20220325015306-262786 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
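The failure is a readiness timeout, not a crash: the stderr trace shows the node polled as `"Ready":"False"` from 01:57:26 until the wait budget ran out at 02:01:26. As a rough sanity check (a sketch assuming GNU `date` and the klog header format `IMMDD HH:MM:SS.ffffff PID file:line`), the elapsed wait can be recomputed from the first and last `node_ready` timestamps in the trace:

```shell
# Recompute the node_ready wait from the first and last klog timestamps above.
# first: node_ready.go:35  "waiting up to 6m0s for node ... to be Ready"
# last:  node_ready.go:38  "duration metric: took 4m0.007092133s"
first="01:57:26"
last="02:01:26"
start_s=$(date -u -d "1970-01-01 $first" +%s)
end_s=$(date -u -d "1970-01-01 $last" +%s)
echo "waited $(( end_s - start_s )) seconds"
```

This yields 240 seconds (4m), matching the `duration metric` line, so the node genuinely never left NotReady; with the docker driver plus containerd, a node stuck NotReady after a clean `kubeadm init` typically points at the CNI (kindnet was selected above) rather than at the control plane.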
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:232: (dbg) Run: docker inspect old-k8s-version-20220325015306-262786
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20220325015306-262786:
-- stdout --
[
{
"Id": "e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b",
"Created": "2022-03-25T01:56:43.297059247Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 457693,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-03-25T01:56:43.655669688Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
"ResolvConfPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hostname",
"HostsPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/hosts",
"LogPath": "/var/lib/docker/containers/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b/e6a4c0e8f4c7486a50d4874ff2263423feadcfce0ee470b20fd1780d30d5156b-json.log",
"Name": "/old-k8s-version-20220325015306-262786",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-20220325015306-262786:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-20220325015306-262786",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [
{
"PathOnHost": "/dev/fuse",
"PathInContainer": "/dev/fuse",
"CgroupPermissions": "rwm"
}
],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3-init/diff:/var/lib/docker/overlay2/be54eb94fcdf63a43430c63a2deca34069b6322a2c5c80bf32c41c253b4eef44/diff:/var/lib/docker/overlay2/21ae1babc9289fff260c3571871aeb833b37e21656a9cc3eb8df07eb3fe4f79a/diff:/var/lib/docker/overlay2/3ee3331c2ddb88569b411d0ab54a2ef7d7d01ca16c80ced0833580bfbe9bdada/diff:/var/lib/docker/overlay2/d8bc8d60c9bd47ff1095b644ef6d44396d637a148bcebd5ea3b7706fee6b13fe/diff:/var/lib/docker/overlay2/ca1519d93c9c70a99f709b179bab33e31837f4b561c407c362770656a0ad970a/diff:/var/lib/docker/overlay2/8b7ed626d5c01c442f80e5e9bbe87bdaa4e3b209e4d0720010e78ab32631a44a/diff:/var/lib/docker/overlay2/fb54723378f675b6bc72cd8608807007fdf0fc435e1383398764588c2881dcc7/diff:/var/lib/docker/overlay2/20deb1df880f3adcdce0caa0e0b6ce0170bb01f7b7c564aa7c00c10e886a8422/diff:/var/lib/docker/overlay2/3e9c58516a6ca7eb07cbd77ece10826bcffc2c564c20a046413c894f1e457c14/diff:/var/lib/docker/overlay2/9fb4a5
72727350e63058db77497edb3aa8f3fd157bf3faa4b882f3d6218a2d2c/diff:/var/lib/docker/overlay2/2dad70b776042365cd2686f6925d1728b98e82e82f5ec21fcafaa6ce796653ed/diff:/var/lib/docker/overlay2/d94272e0e249e656b05e1483e035d137254d3bab6b9c568065d1f8783a72cf04/diff:/var/lib/docker/overlay2/c92254120acded698585ef577c9ac3d6f73267981cf36a87ee38ccd694f47b94/diff:/var/lib/docker/overlay2/84b4bbb670c367ba779baabe503b5345e2c3e2beb5a4505c3b235e5db4e89ee6/diff:/var/lib/docker/overlay2/4981a02b24aef7d5c066a42837381dcdd4a299b491d8e55523fca674cd0db0d1/diff:/var/lib/docker/overlay2/c3c34e9c466bb3a144a51042f0930825943916afe285a7f97644c400518f341f/diff:/var/lib/docker/overlay2/44f4921d100d6ba90db390588513726503aec84844325bd99eeb137c6018277f/diff:/var/lib/docker/overlay2/a39a458488b7f863079e4c6b58196e8a4f9082987519a734c45a007cd0d94828/diff:/var/lib/docker/overlay2/f0312047c7b0b02fd66fd826e23406e40cee0ca3ceecfd3ead5dcecbc5026230/diff:/var/lib/docker/overlay2/96c9397a20500e41888794ddb5877995a1734042648a24d59ca0d2ec5021e9de/diff:/var/lib/d
ocker/overlay2/8dea2fad08fc7127380e7b5ee48074c49d9bb8abb4e0e626d1753b47e734e16a/diff:/var/lib/docker/overlay2/b45ce3d74626e250be956220b3bdd19784c7b5f160566cf2abc4e3bebec2e787/diff:/var/lib/docker/overlay2/c53d5b53646725c2e75d104fbdf63f67b1a6d4ec7be410f678c39db7ca88704f/diff:/var/lib/docker/overlay2/c989625713fed7c79c6acf122f86cb4a5d36c5c25f16b6ff042aba0f5c76ef40/diff:/var/lib/docker/overlay2/062c90de70f705242f19d7fd008480be165d852e536336d97bcfe7aaba03bc2c/diff:/var/lib/docker/overlay2/9dd5e3e1997449a8dd0820e30ab1aa5b34db265e9783f9431ebcca7ceaf17510/diff:/var/lib/docker/overlay2/4cb50a0a67380109d348cd3005e5b855fceaf243cf5b0130df8952ed58e6c56c/diff:/var/lib/docker/overlay2/1fce572a3789e30bd91fd684a3bc2cae58743b3718b1d078378158f22156795e/diff:/var/lib/docker/overlay2/2bb28738c8f2de75a3da83169e8b29e28c57bf73908fe80dbca06551ac39d459/diff:/var/lib/docker/overlay2/70c1f9c120af3acda7bbd97c063aeed205a47f16b2818b7a2c4e5cfa2e3321bf/diff:/var/lib/docker/overlay2/84cfc718f71abd3da77845f467dceaeceb62953d1f92e9cb2d966b19d2e
9a733/diff:/var/lib/docker/overlay2/8d6f862f75e903072cefca0f974c925dc5946ac5bf7bcb923adecf23cdb3d454/diff:/var/lib/docker/overlay2/778af97f4ec3a1e9ceed247958939b375c3209058ee649ac0231b3ccf59c0e5d/diff:/var/lib/docker/overlay2/c0e0a5b57f41ef9ddf67d67f928bcbbd060abb8aa3ec732c9ee48b3d5ce723a2/diff:/var/lib/docker/overlay2/f4bc2ed173f4985e492d89df0a08aa6017952a9ac37054537d57bb7589c1560e/diff:/var/lib/docker/overlay2/562d496753ef0c1e8279787dfdb7cb4d6e8cfbd0eaf79a1f9dc3fd10916330b5/diff:/var/lib/docker/overlay2/717fb77b4f16514e3bd496845adfe213bd63609053b43f6d800e6757197f0f04/diff:/var/lib/docker/overlay2/4e8d84337665652419a5a40f908d308087c202f55b785114c1e38be84a17eca7/diff:/var/lib/docker/overlay2/5b34f3b4b29c9f9ab991b524096584bbf01d14e9d8d4b7786bda6d28241999e8/diff:/var/lib/docker/overlay2/49e6c28c6a50420d2f8e2a3d3278d425495086d9478a7ece39dd989925949a5d/diff:/var/lib/docker/overlay2/86c1534e0117ca4e106fa3c177c4f1b2d85e37b9d2a5dceeb007afff1721713e/diff:/var/lib/docker/overlay2/c5013a5641f131cadca99884c2ae5b785bfae4
a079463490ea0cd215cd884d43/diff:/var/lib/docker/overlay2/f61ccdb261987275521111370c06a14baf102e5047e24281f278eaaee820a410/diff:/var/lib/docker/overlay2/46838e2b0c3f67b4bfda29963d76e2c8babbd54904a4a6f5745e924a73437c2d/diff:/var/lib/docker/overlay2/16180439a4d3ee12ff794b26cbfd692186d7785b4c6f33c8c57416535667c54e/diff",
"MergedDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/merged",
"UpperDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/diff",
"WorkDir": "/var/lib/docker/overlay2/b3d0a0d0fc7f35955d553b9e7fe10b935e729813adb1ca16157f721db2aeccf3/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-20220325015306-262786",
"Source": "/var/lib/docker/volumes/old-k8s-version-20220325015306-262786/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-20220325015306-262786",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
"name.minikube.sigs.k8s.io": "old-k8s-version-20220325015306-262786",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "44b9519d0b55a0dbe9bc349c627da03ca1d456aab29fe1f9cc6fbe902a60b4e0",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49539"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49538"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49535"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49537"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49536"
}
]
},
"SandboxKey": "/var/run/docker/netns/44b9519d0b55",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-20220325015306-262786": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": [
"e6a4c0e8f4c7",
"old-k8s-version-20220325015306-262786"
],
"NetworkID": "739cf1dc095b5d758dfcb21f6f999d4a170c6b33046de4a26204586f05d2d4a4",
"EndpointID": "f17636c1e1855543cb0356e0ced5eac0102a5fed579cb886a1c3e850498bc7d7",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:240: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 -p old-k8s-version-20220325015306-262786 logs -n 25
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/FirstStart logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
| start | -p | missing-upgrade-20220325014930-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:50:33 UTC | Fri, 25 Mar 2022 01:51:18 UTC |
| | missing-upgrade-20220325014930-262786 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | missing-upgrade-20220325014930-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:18 UTC | Fri, 25 Mar 2022 01:51:21 UTC |
| | missing-upgrade-20220325014930-262786 | | | | | |
| start | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:21 UTC | Fri, 25 Mar 2022 01:52:32 UTC |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:32 UTC | Fri, 25 Mar 2022 01:52:47 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| pause | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:48 UTC | Fri, 25 Mar 2022 01:52:48 UTC |
| | --alsologtostderr -v=5 | | | | | |
| unpause | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:49 UTC | Fri, 25 Mar 2022 01:52:49 UTC |
| | --alsologtostderr -v=5 | | | | | |
| start | -p | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:51:06 UTC | Fri, 25 Mar 2022 01:52:50 UTC |
| | kubernetes-upgrade-20220325015003-262786 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.4-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| start | -p | cert-expiration-20220325014851-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:36 UTC | Fri, 25 Mar 2022 01:52:51 UTC |
| | cert-expiration-20220325014851-262786 | | | | | |
| | --memory=2048 --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | cert-expiration-20220325014851-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:51 UTC | Fri, 25 Mar 2022 01:52:54 UTC |
| | cert-expiration-20220325014851-262786 | | | | | |
| pause | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:49 UTC | Fri, 25 Mar 2022 01:52:55 UTC |
| | --alsologtostderr -v=5 | | | | | |
| delete | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:55 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
| | --alsologtostderr -v=5 | | | | | |
| start | -p | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:50 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
| | kubernetes-upgrade-20220325015003-262786 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.23.4-rc.0 | | | | | |
| | --alsologtostderr -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| profile | list --output json | minikube | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:05 UTC | Fri, 25 Mar 2022 01:53:05 UTC |
| delete | -p pause-20220325015121-262786 | pause-20220325015121-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:06 UTC | Fri, 25 Mar 2022 01:53:06 UTC |
| delete | -p | kubernetes-upgrade-20220325015003-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:05 UTC | Fri, 25 Mar 2022 01:53:09 UTC |
| | kubernetes-upgrade-20220325015003-262786 | | | | | |
| start | -p auto-20220325014919-262786 | auto-20220325014919-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:52:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p auto-20220325014919-262786 | auto-20220325014919-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:54 UTC | Fri, 25 Mar 2022 01:53:54 UTC |
| | pgrep -a kubelet | | | | | |
| delete | -p auto-20220325014919-262786 | auto-20220325014919-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:05 UTC | Fri, 25 Mar 2022 01:54:08 UTC |
| start | -p | running-upgrade-20220325014921-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:37 UTC | Fri, 25 Mar 2022 01:54:11 UTC |
| | running-upgrade-20220325014921-262786 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p | running-upgrade-20220325014921-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:11 UTC | Fri, 25 Mar 2022 01:54:22 UTC |
| | running-upgrade-20220325014921-262786 | | | | | |
| start | -p | cilium-20220325014921-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:53:09 UTC | Fri, 25 Mar 2022 01:54:40 UTC |
| | cilium-20220325014921-262786 | | | | | |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=cilium --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p | cilium-20220325014921-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:45 UTC | Fri, 25 Mar 2022 01:54:45 UTC |
| | cilium-20220325014921-262786 | | | | | |
| | pgrep -a kubelet | | | | | |
| delete | -p | cilium-20220325014921-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:54:57 UTC | Fri, 25 Mar 2022 01:55:00 UTC |
| | cilium-20220325014921-262786 | | | | | |
| start | -p | kindnet-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:55:00 UTC | Fri, 25 Mar 2022 01:56:12 UTC |
| | kindnet-20220325014920-262786 | | | | | |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=kindnet --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | -p | kindnet-20220325014920-262786 | jenkins | v1.25.2 | Fri, 25 Mar 2022 01:56:17 UTC | Fri, 25 Mar 2022 01:56:17 UTC |
| | kindnet-20220325014920-262786 | | | | | |
| | pgrep -a kubelet | | | | | |
|---------|------------------------------------------|------------------------------------------|---------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2022/03/25 01:55:00
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.17.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0325 01:55:00.714916 449514 out.go:297] Setting OutFile to fd 1 ...
I0325 01:55:00.715078 449514 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0325 01:55:00.715089 449514 out.go:310] Setting ErrFile to fd 2...
I0325 01:55:00.715093 449514 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0325 01:55:00.715209 449514 root.go:315] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
I0325 01:55:00.715486 449514 out.go:304] Setting JSON to false
I0325 01:55:00.717003 449514 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":16373,"bootTime":1648156928,"procs":672,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0325 01:55:00.717080 449514 start.go:125] virtualization: kvm guest
I0325 01:55:00.720007 449514 out.go:176] * [kindnet-20220325014920-262786] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
I0325 01:55:00.721607 449514 out.go:176] - MINIKUBE_LOCATION=13812
I0325 01:55:00.720213 449514 notify.go:193] Checking for updates...
I0325 01:55:00.723033 449514 out.go:176] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0325 01:55:00.724516 449514 out.go:176] - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:55:00.725923 449514 out.go:176] - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
I0325 01:55:00.727325 449514 out.go:176] - MINIKUBE_BIN=out/minikube-linux-amd64
I0325 01:55:00.727846 449514 config.go:176] Loaded profile config "calico-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:55:00.727961 449514 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:55:00.728092 449514 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:55:00.728148 449514 driver.go:346] Setting default libvirt URI to qemu:///system
I0325 01:55:00.774853 449514 docker.go:136] docker version: linux-20.10.14
I0325 01:55:00.775028 449514 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0325 01:55:00.877241 449514 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:55:00.807852165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0325 01:55:00.877362 449514 docker.go:253] overlay module found
I0325 01:55:00.879932 449514 out.go:176] * Using the docker driver based on user configuration
I0325 01:55:00.879963 449514 start.go:284] selected driver: docker
I0325 01:55:00.879968 449514 start.go:801] validating driver "docker" against <nil>
I0325 01:55:00.879986 449514 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
W0325 01:55:00.880043 449514 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0325 01:55:00.880063 449514 out.go:241] ! Your cgroup does not allow setting memory.
I0325 01:55:00.881696 449514 out.go:176] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
I0325 01:55:00.882284 449514 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0325 01:55:00.986213 449514 info.go:263] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-25 01:55:00.913247272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1021-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662795776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0325 01:55:00.986345 449514 start_flags.go:290] no existing cluster config was found, will generate one from the flags
I0325 01:55:00.986546 449514 start_flags.go:834] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0325 01:55:00.986577 449514 cni.go:93] Creating CNI manager for "kindnet"
I0325 01:55:00.986588 449514 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0325 01:55:00.986599 449514 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
I0325 01:55:00.986604 449514 start_flags.go:299] Found "CNI" CNI - setting NetworkPlugin=cni
I0325 01:55:00.986614 449514 start_flags.go:304] config:
{Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0325 01:55:00.989926 449514 out.go:176] * Starting control plane node kindnet-20220325014920-262786 in cluster kindnet-20220325014920-262786
I0325 01:55:00.989961 449514 cache.go:120] Beginning downloading kic base image for docker with containerd
I0325 01:55:00.991465 449514 out.go:176] * Pulling base image ...
I0325 01:55:00.991495 449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
I0325 01:55:00.991520 449514 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4
I0325 01:55:00.991532 449514 cache.go:57] Caching tarball of preloaded images
I0325 01:55:00.991588 449514 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0325 01:55:00.991753 449514 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
I0325 01:55:00.991772 449514 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on containerd
I0325 01:55:00.991875 449514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json ...
I0325 01:55:00.991911 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json: {Name:mk363c00d135004479b2648b7f626008aacd2fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:01.026713 449514 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
I0325 01:55:01.026749 449514 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
I0325 01:55:01.026766 449514 cache.go:208] Successfully downloaded all kic artifacts
I0325 01:55:01.026808 449514 start.go:348] acquiring machines lock for kindnet-20220325014920-262786: {Name:mka5ea64952550618d6576e44be996cc56d8d8bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0325 01:55:01.026939 449514 start.go:352] acquired machines lock for "kindnet-20220325014920-262786" in 109.57µs
I0325 01:55:01.026980 449514 start.go:90] Provisioning new machine with config: &{Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:55:01.027079 449514 start.go:127] createHost starting for "" (driver="docker")
I0325 01:54:57.236361 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:00.275444 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:54:58.973463 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:01.450813 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:04.932050 442784 out.go:203] - Generating certificates and keys ...
I0325 01:55:04.936166 442784 out.go:203] - Booting up control plane ...
I0325 01:55:04.939871 442784 out.go:203] - Configuring RBAC rules ...
I0325 01:55:04.942250 442784 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
I0325 01:55:01.029656 449514 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
I0325 01:55:01.029913 449514 start.go:161] libmachine.API.Create for "kindnet-20220325014920-262786" (driver="docker")
I0325 01:55:01.029948 449514 client.go:168] LocalClient.Create starting
I0325 01:55:01.030015 449514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
I0325 01:55:01.030070 449514 main.go:130] libmachine: Decoding PEM data...
I0325 01:55:01.030087 449514 main.go:130] libmachine: Parsing certificate...
I0325 01:55:01.030122 449514 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
I0325 01:55:01.030139 449514 main.go:130] libmachine: Decoding PEM data...
I0325 01:55:01.030147 449514 main.go:130] libmachine: Parsing certificate...
I0325 01:55:01.030465 449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:55:01.062523 449514 cli_runner.go:180] docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:55:01.062661 449514 network_create.go:254] running [docker network inspect kindnet-20220325014920-262786] to gather additional debugging logs...
I0325 01:55:01.062707 449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786
W0325 01:55:01.097010 449514 cli_runner.go:180] docker network inspect kindnet-20220325014920-262786 returned with exit code 1
I0325 01:55:01.097062 449514 network_create.go:257] error running [docker network inspect kindnet-20220325014920-262786]: docker network inspect kindnet-20220325014920-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: kindnet-20220325014920-262786
I0325 01:55:01.097128 449514 network_create.go:259] output of [docker network inspect kindnet-20220325014920-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: kindnet-20220325014920-262786
** /stderr **
I0325 01:55:01.097193 449514 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:55:01.135618 449514 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
I0325 01:55:01.136762 449514 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0005880b0] misses:0}
I0325 01:55:01.136837 449514 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0325 01:55:01.136861 449514 network_create.go:106] attempt to create docker network kindnet-20220325014920-262786 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0325 01:55:01.136925 449514 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220325014920-262786
I0325 01:55:01.205843 449514 network_create.go:90] docker network kindnet-20220325014920-262786 192.168.58.0/24 created
I0325 01:55:01.205880 449514 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220325014920-262786" container
I0325 01:55:01.205973 449514 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0325 01:55:01.239559 449514 cli_runner.go:133] Run: docker volume create kindnet-20220325014920-262786 --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --label created_by.minikube.sigs.k8s.io=true
I0325 01:55:01.271714 449514 oci.go:102] Successfully created a docker volume kindnet-20220325014920-262786
I0325 01:55:01.271799 449514 cli_runner.go:133] Run: docker run --rm --name kindnet-20220325014920-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --entrypoint /usr/bin/test -v kindnet-20220325014920-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
I0325 01:55:01.850347 449514 oci.go:106] Successfully prepared a docker volume kindnet-20220325014920-262786
I0325 01:55:01.850392 449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
I0325 01:55:01.850418 449514 kic.go:179] Starting extracting preloaded images to volume ...
I0325 01:55:01.850497 449514 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220325014920-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
I0325 01:55:03.334100 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:06.368209 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:04.944398 442784 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
I0325 01:55:04.944466 442784 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
I0325 01:55:04.944515 442784 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
I0325 01:55:04.948418 442784 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
I0325 01:55:04.948452 442784 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
I0325 01:55:04.974787 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0325 01:55:05.905700 442784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0325 01:55:05.905784 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:05.905796 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=custom-weave-20220325014921-262786 minikube.k8s.io/updated_at=2022_03_25T01_55_05_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:05.913279 442784 ops.go:34] apiserver oom_adj: -16
I0325 01:55:06.350587 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:06.946231 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:07.446253 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:03.951437 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:06.450673 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:09.407114 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:07.946635 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:08.446097 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:08.945803 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:09.446739 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:09.946451 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:10.445869 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:10.945907 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:11.446001 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:11.946712 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:12.446459 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:08.791543 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:10.950007 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:12.950870 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:12.256742 449514 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220325014920-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (10.406200607s)
I0325 01:55:12.256780 449514 kic.go:188] duration metric: took 10.406356 seconds to extract preloaded images to volume
W0325 01:55:12.256859 449514 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0325 01:55:12.256876 449514 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0325 01:55:12.256928 449514 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0325 01:55:12.350466 449514 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220325014920-262786 --name kindnet-20220325014920-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220325014920-262786 --network kindnet-20220325014920-262786 --ip 192.168.58.2 --volume kindnet-20220325014920-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
I0325 01:55:12.757384 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Running}}
I0325 01:55:12.792632 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:12.825757 449514 cli_runner.go:133] Run: docker exec kindnet-20220325014920-262786 stat /var/lib/dpkg/alternatives/iptables
I0325 01:55:12.888453 449514 oci.go:281] the created container "kindnet-20220325014920-262786" has a running status.
I0325 01:55:12.888494 449514 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa...
I0325 01:55:13.118673 449514 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0325 01:55:13.204377 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:13.238141 449514 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0325 01:55:13.238171 449514 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220325014920-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
I0325 01:55:13.326830 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:13.361589 449514 machine.go:88] provisioning docker machine ...
I0325 01:55:13.361634 449514 ubuntu.go:169] provisioning hostname "kindnet-20220325014920-262786"
I0325 01:55:13.361706 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:13.391728 449514 main.go:130] libmachine: Using SSH client type: native
I0325 01:55:13.391972 449514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49534 <nil> <nil>}
I0325 01:55:13.391997 449514 main.go:130] libmachine: About to run SSH command:
sudo hostname kindnet-20220325014920-262786 && echo "kindnet-20220325014920-262786" | sudo tee /etc/hostname
I0325 01:55:13.521224 449514 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220325014920-262786
I0325 01:55:13.521306 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:13.552907 449514 main.go:130] libmachine: Using SSH client type: native
I0325 01:55:13.553068 449514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49534 <nil> <nil>}
I0325 01:55:13.553097 449514 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\skindnet-20220325014920-262786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220325014920-262786/g' /etc/hosts;
else
echo '127.0.1.1 kindnet-20220325014920-262786' | sudo tee -a /etc/hosts;
fi
fi
I0325 01:55:13.670815 449514 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0325 01:55:13.670849 449514 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558
/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0325 01:55:13.670880 449514 ubuntu.go:177] setting up certificates
I0325 01:55:13.670894 449514 provision.go:83] configureAuth start
I0325 01:55:13.670975 449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
I0325 01:55:13.702037 449514 provision.go:138] copyHostCerts
I0325 01:55:13.702103 449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0325 01:55:13.702115 449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0325 01:55:13.702173 449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0325 01:55:13.702265 449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0325 01:55:13.702275 449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0325 01:55:13.702300 449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0325 01:55:13.702357 449514 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0325 01:55:13.702364 449514 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0325 01:55:13.702384 449514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0325 01:55:13.702428 449514 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220325014920-262786 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220325014920-262786]
I0325 01:55:13.877542 449514 provision.go:172] copyRemoteCerts
I0325 01:55:13.877598 449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0325 01:55:13.877633 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:13.910635 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:13.998738 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0325 01:55:14.016410 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
I0325 01:55:14.033193 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0325 01:55:14.049802 449514 provision.go:86] duration metric: configureAuth took 378.8914ms
I0325 01:55:14.049826 449514 ubuntu.go:193] setting minikube options for container-runtime
I0325 01:55:14.050001 449514 config.go:176] Loaded profile config "kindnet-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:55:14.050015 449514 machine.go:91] provisioned docker machine in 688.400649ms
I0325 01:55:14.050021 449514 client.go:171] LocalClient.Create took 13.020061955s
I0325 01:55:14.050037 449514 start.go:169] duration metric: libmachine.API.Create for "kindnet-20220325014920-262786" took 13.020125504s
I0325 01:55:14.050044 449514 start.go:302] post-start starting for "kindnet-20220325014920-262786" (driver="docker")
I0325 01:55:14.050050 449514 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0325 01:55:14.050113 449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0325 01:55:14.050160 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:14.083492 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:14.174183 449514 ssh_runner.go:195] Run: cat /etc/os-release
I0325 01:55:14.176870 449514 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0325 01:55:14.176891 449514 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0325 01:55:14.176901 449514 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0325 01:55:14.176908 449514 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0325 01:55:14.176918 449514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0325 01:55:14.176964 449514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0325 01:55:14.177026 449514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
I0325 01:55:14.177106 449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0325 01:55:14.183578 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:55:14.200482 449514 start.go:305] post-start completed in 150.428094ms
I0325 01:55:14.200771 449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
I0325 01:55:14.232519 449514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/config.json ...
I0325 01:55:14.232743 449514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:55:14.232798 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:14.266097 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:14.350928 449514 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:55:14.354518 449514 start.go:130] duration metric: createHost completed in 13.327425814s
I0325 01:55:14.354543 449514 start.go:81] releasing machines lock for "kindnet-20220325014920-262786", held for 13.327579886s
I0325 01:55:14.354616 449514 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220325014920-262786
I0325 01:55:14.384844 449514 ssh_runner.go:195] Run: systemctl --version
I0325 01:55:14.384877 449514 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0325 01:55:14.384893 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:14.384926 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:14.416087 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:14.417099 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:14.518639 449514 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0325 01:55:14.529395 449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0325 01:55:14.537892 449514 docker.go:183] disabling docker service ...
I0325 01:55:14.537955 449514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0325 01:55:14.553167 449514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0325 01:55:14.561708 449514 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0325 01:55:14.639668 449514 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0325 01:55:14.717986 449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0325 01:55:14.727173 449514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0325 01:55:14.739685 449514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgI
CBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My42IgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZ
XMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY
2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
I0325 01:55:14.752662 449514 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0325 01:55:14.758738 449514 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0325 01:55:14.764818 449514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0325 01:55:14.834079 449514 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0325 01:55:14.897100 449514 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0325 01:55:14.897174 449514 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0325 01:55:14.901028 449514 start.go:462] Will wait 60s for crictl version
I0325 01:55:14.901085 449514 ssh_runner.go:195] Run: sudo crictl version
I0325 01:55:14.923419 449514 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-03-25T01:55:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0325 01:55:12.439881 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:15.478625 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:12.945864 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:13.445997 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:13.945906 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:14.446100 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:14.946258 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:15.446416 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:15.946743 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:16.445790 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:16.946757 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:17.445866 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:14.951063 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:17.449838 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:17.945939 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:18.446030 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:18.946679 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:19.445913 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:19.946715 442784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:20.003561 442784 kubeadm.go:1020] duration metric: took 14.097833739s to wait for elevateKubeSystemPrivileges.
I0325 01:55:20.003591 442784 kubeadm.go:393] StartCluster complete in 31.384120335s
I0325 01:55:20.003609 442784 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:20.003709 442784 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:55:20.004656 442784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:20.519233 442784 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220325014921-262786" rescaled to 1
I0325 01:55:20.519302 442784 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:55:20.521500 442784 out.go:176] * Verifying Kubernetes components...
I0325 01:55:20.519380 442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0325 01:55:20.521565 442784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0325 01:55:20.519622 442784 config.go:176] Loaded profile config "custom-weave-20220325014921-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:55:20.519397 442784 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0325 01:55:20.521700 442784 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220325014921-262786"
I0325 01:55:20.521715 442784 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220325014921-262786"
W0325 01:55:20.521720 442784 addons.go:165] addon storage-provisioner should already be in state true
I0325 01:55:20.521747 442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
I0325 01:55:20.522052 442784 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220325014921-262786"
I0325 01:55:20.522077 442784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220325014921-262786"
I0325 01:55:20.522294 442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
I0325 01:55:20.522407 442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
I0325 01:55:20.564684 442784 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220325014921-262786"
W0325 01:55:20.564711 442784 addons.go:165] addon default-storageclass should already be in state true
I0325 01:55:20.564734 442784 host.go:66] Checking if "custom-weave-20220325014921-262786" exists ...
I0325 01:55:20.565142 442784 cli_runner.go:133] Run: docker container inspect custom-weave-20220325014921-262786 --format={{.State.Status}}
I0325 01:55:20.567523 442784 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0325 01:55:20.567643 442784 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:55:20.567660 442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0325 01:55:20.567697 442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
I0325 01:55:20.601389 442784 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0325 01:55:20.601417 442784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0325 01:55:20.601486 442784 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220325014921-262786
I0325 01:55:20.603466 442784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.67.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0325 01:55:20.603729 442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
I0325 01:55:20.604592 442784 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220325014921-262786" to be "Ready" ...
I0325 01:55:20.607815 442784 node_ready.go:49] node "custom-weave-20220325014921-262786" has status "Ready":"True"
I0325 01:55:20.607832 442784 node_ready.go:38] duration metric: took 3.210454ms waiting for node "custom-weave-20220325014921-262786" to be "Ready" ...
I0325 01:55:20.607840 442784 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0325 01:55:20.616184 442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
I0325 01:55:20.644527 442784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49529 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/custom-weave-20220325014921-262786/id_rsa Username:docker}
I0325 01:55:20.708612 442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:55:20.800646 442784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0325 01:55:20.994686 442784 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
I0325 01:55:18.513302 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:21.545857 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:21.217641 442784 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0325 01:55:21.217669 442784 addons.go:417] enableAddons completed in 698.281954ms
I0325 01:55:19.449885 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:21.450689 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:23.450820 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:25.971042 449514 ssh_runner.go:195] Run: sudo crictl version
I0325 01:55:25.993240 449514 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.5.10
RuntimeApiVersion: v1alpha2
I0325 01:55:25.993303 449514 ssh_runner.go:195] Run: containerd --version
I0325 01:55:26.013085 449514 ssh_runner.go:195] Run: containerd --version
I0325 01:55:26.035628 449514 out.go:176] * Preparing Kubernetes v1.23.3 on containerd 1.5.10 ...
I0325 01:55:26.035700 449514 cli_runner.go:133] Run: docker network inspect kindnet-20220325014920-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:55:26.065833 449514 ssh_runner.go:195] Run: grep 192.168.58.1 host.minikube.internal$ /etc/hosts
I0325 01:55:26.069174 449514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0325 01:55:24.580164 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:22.627827 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:25.126871 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:27.128372 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:25.949935 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:27.951276 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:26.081213 449514 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0325 01:55:26.081289 449514 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime containerd
I0325 01:55:26.081340 449514 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:55:26.104340 449514 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:55:26.104359 449514 containerd.go:526] Images already preloaded, skipping extraction
I0325 01:55:26.104399 449514 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:55:26.126228 449514 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:55:26.126252 449514 cache_images.go:84] Images are preloaded, skipping loading
I0325 01:55:26.126297 449514 ssh_runner.go:195] Run: sudo crictl info
I0325 01:55:26.148746 449514 cni.go:93] Creating CNI manager for "kindnet"
I0325 01:55:26.148784 449514 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0325 01:55:26.148799 449514 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220325014920-262786 NodeName:kindnet-20220325014920-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0325 01:55:26.148909 449514 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.58.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "kindnet-20220325014920-262786"
kubeletExtraArgs:
node-ip: 192.168.58.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0325 01:55:26.149005 449514 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kindnet-20220325014920-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
I0325 01:55:26.149052 449514 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
I0325 01:55:26.155854 449514 binaries.go:44] Found k8s binaries, skipping transfer
I0325 01:55:26.155931 449514 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0325 01:55:26.162631 449514 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
I0325 01:55:26.174566 449514 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0325 01:55:26.186489 449514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2058 bytes)
I0325 01:55:26.197829 449514 ssh_runner.go:195] Run: grep 192.168.58.2 control-plane.minikube.internal$ /etc/hosts
I0325 01:55:26.200723 449514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0325 01:55:26.209968 449514 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786 for IP: 192.168.58.2
I0325 01:55:26.210075 449514 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
I0325 01:55:26.210119 449514 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
I0325 01:55:26.210173 449514 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key
I0325 01:55:26.210194 449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt with IP's: []
I0325 01:55:26.417902 449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt ...
I0325 01:55:26.417934 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.crt: {Name:mke482e0d3615d15f8a0e1ec3f80257bfc97c4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.418143 449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key ...
I0325 01:55:26.418162 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/client.key: {Name:mkff32edfcc4c2eb707360d40e7c1afa06b2c230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.418268 449514 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041
I0325 01:55:26.418285 449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0325 01:55:26.493489 449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 ...
I0325 01:55:26.493515 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041: {Name:mk14b16498dbc281b3740ba71b4be7d62b8bbe5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.493692 449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041 ...
I0325 01:55:26.493709 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041: {Name:mk3793378151f91dd4d340d5bd722f9a7a907533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.493822 449514 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt
I0325 01:55:26.493884 449514 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key
I0325 01:55:26.493934 449514 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key
I0325 01:55:26.493947 449514 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt with IP's: []
I0325 01:55:26.560393 449514 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt ...
I0325 01:55:26.560416 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt: {Name:mk572877cd2b71a469f8f7fe55a734caed58088c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.560613 449514 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key ...
I0325 01:55:26.560632 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key: {Name:mk8b789e0b8f5b106866cbdd103abe48fba916ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:26.560833 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
W0325 01:55:26.560869 449514 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
I0325 01:55:26.560882 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
I0325 01:55:26.560906 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
I0325 01:55:26.560931 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
I0325 01:55:26.560953 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
I0325 01:55:26.560998 449514 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:55:26.561499 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0325 01:55:26.578805 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0325 01:55:26.596151 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0325 01:55:26.612490 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220325014920-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0325 01:55:26.629157 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0325 01:55:26.645606 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0325 01:55:26.661763 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0325 01:55:26.677567 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0325 01:55:26.693482 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
I0325 01:55:26.709418 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0325 01:55:26.725313 449514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
I0325 01:55:26.741234 449514 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0325 01:55:26.753762 449514 ssh_runner.go:195] Run: openssl version
I0325 01:55:26.758450 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
I0325 01:55:26.765176 449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
I0325 01:55:26.768085 449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
I0325 01:55:26.768131 449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
I0325 01:55:26.772576 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
I0325 01:55:26.779292 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0325 01:55:26.786531 449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0325 01:55:26.789301 449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
I0325 01:55:26.789346 449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0325 01:55:26.793813 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0325 01:55:26.800578 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
I0325 01:55:26.807538 449514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
I0325 01:55:26.810562 449514 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
I0325 01:55:26.810598 449514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
I0325 01:55:26.815393 449514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
I0325 01:55:26.822494 449514 kubeadm.go:391] StartCluster: {Name:kindnet-20220325014920-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:kindnet-20220325014920-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0325 01:55:26.822582 449514 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0325 01:55:26.822646 449514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0325 01:55:26.846017 449514 cri.go:87] found id: ""
I0325 01:55:26.846098 449514 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0325 01:55:26.853346 449514 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0325 01:55:26.860032 449514 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0325 01:55:26.860085 449514 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0325 01:55:26.866535 449514 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0325 01:55:26.866603 449514 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0325 01:55:27.612926 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:30.649664 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:29.128431 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:31.627540 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:30.450113 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:32.950401 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:33.683100 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:36.715325 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:34.127415 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:36.128099 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:35.450311 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:37.950414 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:39.751091 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:42.288097 449514 out.go:203] - Generating certificates and keys ...
I0325 01:55:42.291193 449514 out.go:203] - Booting up control plane ...
I0325 01:55:42.294032 449514 out.go:203] - Configuring RBAC rules ...
I0325 01:55:42.295734 449514 cni.go:93] Creating CNI manager for "kindnet"
I0325 01:55:38.627308 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:40.627890 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:39.950851 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:42.450079 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:42.297463 449514 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0325 01:55:42.297522 449514 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0325 01:55:42.300986 449514 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.3/kubectl ...
I0325 01:55:42.301002 449514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0325 01:55:42.313726 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0325 01:55:43.040679 449514 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0325 01:55:43.040759 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:43.040761 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=kindnet-20220325014920-262786 minikube.k8s.io/updated_at=2022_03_25T01_55_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:43.047379 449514 ops.go:34] apiserver oom_adj: -16
I0325 01:55:43.109381 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:43.663589 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:44.163108 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:44.663068 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:45.163910 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:45.663089 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:42.785301 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:45.821589 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:43.127776 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:45.128206 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:44.949971 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:46.950388 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:46.163118 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:46.663747 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:47.163779 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:47.663912 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:48.163060 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:48.663802 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:49.163741 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:49.663849 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:50.163061 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:50.663061 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:48.854113 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:47.626880 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:49.627246 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:51.628191 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:49.450803 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:51.950005 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:51.163949 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:51.663251 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:52.163124 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:52.663755 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:53.163330 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:53.663117 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:54.163827 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:54.663886 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:55.164040 449514 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:55:55.230421 449514 kubeadm.go:1020] duration metric: took 12.189718321s to wait for elevateKubeSystemPrivileges.
I0325 01:55:55.230459 449514 kubeadm.go:393] StartCluster complete in 28.407973168s
I0325 01:55:55.230497 449514 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:55.230587 449514 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:55:55.231954 449514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:55:55.747834 449514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220325014920-262786" rescaled to 1
I0325 01:55:55.747892 449514 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:55:55.749840 449514 out.go:176] * Verifying Kubernetes components...
I0325 01:55:55.747953 449514 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0325 01:55:55.747943 449514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0325 01:55:55.748168 449514 config.go:176] Loaded profile config "kindnet-20220325014920-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.3
I0325 01:55:55.749922 449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0325 01:55:55.749957 449514 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220325014920-262786"
I0325 01:55:55.749991 449514 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220325014920-262786"
W0325 01:55:55.750003 449514 addons.go:165] addon storage-provisioner should already be in state true
I0325 01:55:55.750037 449514 host.go:66] Checking if "kindnet-20220325014920-262786" exists ...
I0325 01:55:55.749959 449514 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220325014920-262786"
I0325 01:55:55.750175 449514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220325014920-262786"
I0325 01:55:55.750481 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:55.750645 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:55.764770 449514 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220325014920-262786" to be "Ready" ...
I0325 01:55:55.796467 449514 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0325 01:55:55.796598 449514 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:55:55.796603 449514 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220325014920-262786"
W0325 01:55:55.796627 449514 addons.go:165] addon default-storageclass should already be in state true
I0325 01:55:55.796658 449514 host.go:66] Checking if "kindnet-20220325014920-262786" exists ...
I0325 01:55:55.796616 449514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0325 01:55:55.796748 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:55.797084 449514 cli_runner.go:133] Run: docker container inspect kindnet-20220325014920-262786 --format={{.State.Status}}
I0325 01:55:55.823354 449514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.58.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0325 01:55:55.838360 449514 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0325 01:55:55.838391 449514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0325 01:55:55.838451 449514 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220325014920-262786
I0325 01:55:55.841405 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:55.871252 449514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49534 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220325014920-262786/id_rsa Username:docker}
I0325 01:55:56.041026 449514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:55:56.089799 449514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0325 01:55:56.108798 449514 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
I0325 01:55:51.885844 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:54.919097 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:54.129214 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:56.129479 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:54.450376 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:56.949706 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:56.330095 449514 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0325 01:55:56.330118 449514 addons.go:417] enableAddons completed in 582.178132ms
I0325 01:55:57.771713 449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
I0325 01:55:59.772257 449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
I0325 01:55:57.951168 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:00.986964 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:55:58.627187 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:00.627532 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:55:59.450129 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:01.450808 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:02.272163 449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
I0325 01:56:04.272264 449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
I0325 01:56:04.022375 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:02.627738 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:05.126638 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:07.126933 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:03.950079 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:06.450402 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:08.450482 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:06.771583 449514 node_ready.go:58] node "kindnet-20220325014920-262786" has status "Ready":"False"
I0325 01:56:07.771772 449514 node_ready.go:49] node "kindnet-20220325014920-262786" has status "Ready":"True"
I0325 01:56:07.771799 449514 node_ready.go:38] duration metric: took 12.006998068s waiting for node "kindnet-20220325014920-262786" to be "Ready" ...
I0325 01:56:07.771807 449514 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0325 01:56:07.778025 449514 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-z9hnb" in "kube-system" namespace to be "Ready" ...
I0325 01:56:09.786294 449514 pod_ready.go:102] pod "coredns-64897985d-z9hnb" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:07.054544 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:10.088923 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:10.786828 449514 pod_ready.go:92] pod "coredns-64897985d-z9hnb" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:10.786856 449514 pod_ready.go:81] duration metric: took 3.008800727s waiting for pod "coredns-64897985d-z9hnb" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.786866 449514 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.790887 449514 pod_ready.go:92] pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:10.790902 449514 pod_ready.go:81] duration metric: took 4.031015ms waiting for pod "etcd-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.790914 449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.794991 449514 pod_ready.go:92] pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:10.795010 449514 pod_ready.go:81] duration metric: took 4.089112ms waiting for pod "kube-apiserver-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.795019 449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.798928 449514 pod_ready.go:92] pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:10.798944 449514 pod_ready.go:81] duration metric: took 3.918878ms waiting for pod "kube-controller-manager-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.798983 449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-td8lj" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.802733 449514 pod_ready.go:92] pod "kube-proxy-td8lj" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:10.802765 449514 pod_ready.go:81] duration metric: took 3.776283ms waiting for pod "kube-proxy-td8lj" in "kube-system" namespace to be "Ready" ...
I0325 01:56:10.802772 449514 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:11.183881 449514 pod_ready.go:92] pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:56:11.183905 449514 pod_ready.go:81] duration metric: took 381.113148ms waiting for pod "kube-scheduler-kindnet-20220325014920-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:56:11.183918 449514 pod_ready.go:38] duration metric: took 3.412101149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0325 01:56:11.183943 449514 api_server.go:51] waiting for apiserver process to appear ...
I0325 01:56:11.184009 449514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0325 01:56:11.193218 449514 api_server.go:71] duration metric: took 15.44529936s to wait for apiserver process to appear ...
I0325 01:56:11.193243 449514 api_server.go:87] waiting for apiserver healthz status ...
I0325 01:56:11.193254 449514 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
I0325 01:56:11.197515 449514 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
ok
I0325 01:56:11.198294 449514 api_server.go:140] control plane version: v1.23.3
I0325 01:56:11.198319 449514 api_server.go:130] duration metric: took 5.06978ms to wait for apiserver health ...
I0325 01:56:11.198329 449514 system_pods.go:43] waiting for kube-system pods to appear ...
I0325 01:56:11.387118 449514 system_pods.go:59] 8 kube-system pods found
I0325 01:56:11.387159 449514 system_pods.go:61] "coredns-64897985d-z9hnb" [5c577c70-7ba0-42f7-84cc-29706381a927] Running
I0325 01:56:11.387167 449514 system_pods.go:61] "etcd-kindnet-20220325014920-262786" [89463790-47bc-4b54-bfe0-764eff89c367] Running
I0325 01:56:11.387173 449514 system_pods.go:61] "kindnet-sqq6l" [f4681712-732f-4c97-a171-96743c9634a6] Running
I0325 01:56:11.387180 449514 system_pods.go:61] "kube-apiserver-kindnet-20220325014920-262786" [838f24ab-2d9c-4d11-b4e5-5f32f133c6f7] Running
I0325 01:56:11.387186 449514 system_pods.go:61] "kube-controller-manager-kindnet-20220325014920-262786" [68d99255-d9ca-4e07-bdb8-6e8d650d33c0] Running
I0325 01:56:11.387192 449514 system_pods.go:61] "kube-proxy-td8lj" [47ac9435-9af3-4083-b483-959467fae74b] Running
I0325 01:56:11.387199 449514 system_pods.go:61] "kube-scheduler-kindnet-20220325014920-262786" [ca1490d3-de38-48f7-94e1-06d6e9631bec] Running
I0325 01:56:11.387210 449514 system_pods.go:61] "storage-provisioner" [42e3fbb5-5d56-42d0-bced-81ef5bdabd94] Running
I0325 01:56:11.387219 449514 system_pods.go:74] duration metric: took 188.884495ms to wait for pod list to return data ...
I0325 01:56:11.387231 449514 default_sa.go:34] waiting for default service account to be created ...
I0325 01:56:11.584178 449514 default_sa.go:45] found service account: "default"
I0325 01:56:11.584205 449514 default_sa.go:55] duration metric: took 196.964681ms for default service account to be created ...
I0325 01:56:11.584213 449514 system_pods.go:116] waiting for k8s-apps to be running ...
I0325 01:56:11.786817 449514 system_pods.go:86] 8 kube-system pods found
I0325 01:56:11.786845 449514 system_pods.go:89] "coredns-64897985d-z9hnb" [5c577c70-7ba0-42f7-84cc-29706381a927] Running
I0325 01:56:11.786850 449514 system_pods.go:89] "etcd-kindnet-20220325014920-262786" [89463790-47bc-4b54-bfe0-764eff89c367] Running
I0325 01:56:11.786855 449514 system_pods.go:89] "kindnet-sqq6l" [f4681712-732f-4c97-a171-96743c9634a6] Running
I0325 01:56:11.786859 449514 system_pods.go:89] "kube-apiserver-kindnet-20220325014920-262786" [838f24ab-2d9c-4d11-b4e5-5f32f133c6f7] Running
I0325 01:56:11.786864 449514 system_pods.go:89] "kube-controller-manager-kindnet-20220325014920-262786" [68d99255-d9ca-4e07-bdb8-6e8d650d33c0] Running
I0325 01:56:11.786868 449514 system_pods.go:89] "kube-proxy-td8lj" [47ac9435-9af3-4083-b483-959467fae74b] Running
I0325 01:56:11.786872 449514 system_pods.go:89] "kube-scheduler-kindnet-20220325014920-262786" [ca1490d3-de38-48f7-94e1-06d6e9631bec] Running
I0325 01:56:11.786875 449514 system_pods.go:89] "storage-provisioner" [42e3fbb5-5d56-42d0-bced-81ef5bdabd94] Running
I0325 01:56:11.786880 449514 system_pods.go:126] duration metric: took 202.662306ms to wait for k8s-apps to be running ...
I0325 01:56:11.786887 449514 system_svc.go:44] waiting for kubelet service to be running ....
I0325 01:56:11.786926 449514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0325 01:56:11.796444 449514 system_svc.go:56] duration metric: took 9.549447ms WaitForService to wait for kubelet.
I0325 01:56:11.796465 449514 kubeadm.go:548] duration metric: took 16.048553309s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0325 01:56:11.796487 449514 node_conditions.go:102] verifying NodePressure condition ...
I0325 01:56:11.985209 449514 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
I0325 01:56:11.985238 449514 node_conditions.go:123] node cpu capacity is 8
I0325 01:56:11.985255 449514 node_conditions.go:105] duration metric: took 188.763227ms to run NodePressure ...
I0325 01:56:11.985267 449514 start.go:213] waiting for startup goroutines ...
I0325 01:56:12.021944 449514 start.go:499] kubectl: 1.23.5, cluster: 1.23.3 (minor skew: 0)
I0325 01:56:12.024417 449514 out.go:176] * Done! kubectl is now configured to use "kindnet-20220325014920-262786" cluster and "default" namespace by default
I0325 01:56:09.127472 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:11.627092 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:10.950537 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:13.450494 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:13.121694 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:16.158628 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:13.627515 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:16.127205 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:15.950800 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:18.450140 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:19.193066 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:18.128126 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:20.128197 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:20.950489 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:23.450799 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:22.229496 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:25.263135 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:22.627521 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:24.628218 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:27.128007 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:25.950728 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:28.450548 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:28.299080 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:31.335036 431164 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0325 01:56:31.335082 431164 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0325 01:56:31.335570 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
W0325 01:56:31.369049 431164 delete.go:135] deletehost failed: Docker machine "old-k8s-version-20220325015306-262786" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0325 01:56:31.369136 431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
I0325 01:56:31.404692 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:31.436643 431164 cli_runner.go:133] Run: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0"
W0325 01:56:31.469236 431164 cli_runner.go:180] docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0" returned with exit code 1
I0325 01:56:31.469271 431164 oci.go:659] error shutdown old-k8s-version-20220325015306-262786: docker exec --privileged -t old-k8s-version-20220325015306-262786 /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 70db97c1e507dd38002925bf640879383cbadb553804ce2496e418013a3ab218 is not running
I0325 01:56:29.626998 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:32.127383 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:32.470272 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:32.503561 431164 oci.go:673] temporary error: container old-k8s-version-20220325015306-262786 status is but expect it to be exited
I0325 01:56:32.503590 431164 oci.go:679] Successfully shutdown container old-k8s-version-20220325015306-262786
I0325 01:56:32.503641 431164 cli_runner.go:133] Run: docker rm -f -v old-k8s-version-20220325015306-262786
I0325 01:56:32.540810 431164 cli_runner.go:133] Run: docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786
W0325 01:56:32.570903 431164 cli_runner.go:180] docker container inspect -f {{.Id}} old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:32.571005 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:56:32.601633 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:56:32.601695 431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
I0325 01:56:32.601719 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
W0325 01:56:32.632633 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:32.632663 431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: old-k8s-version-20220325015306-262786
I0325 01:56:32.632678 431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: old-k8s-version-20220325015306-262786
** /stderr **
W0325 01:56:32.632818 431164 delete.go:139] delete failed (probably ok) <nil>
I0325 01:56:32.632831 431164 fix.go:120] Sleeping 1 second for extra luck!
I0325 01:56:33.633777 431164 start.go:127] createHost starting for "" (driver="docker")
I0325 01:56:30.950428 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:33.449469 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:33.636953 431164 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0325 01:56:33.637111 431164 start.go:161] libmachine.API.Create for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:33.637158 431164 client.go:168] LocalClient.Create starting
I0325 01:56:33.637270 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
I0325 01:56:33.637315 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:56:33.637341 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:56:33.637420 431164 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
I0325 01:56:33.637448 431164 main.go:130] libmachine: Decoding PEM data...
I0325 01:56:33.637471 431164 main.go:130] libmachine: Parsing certificate...
I0325 01:56:33.637805 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0325 01:56:33.670584 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0325 01:56:33.670681 431164 network_create.go:254] running [docker network inspect old-k8s-version-20220325015306-262786] to gather additional debugging logs...
I0325 01:56:33.670699 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786
W0325 01:56:33.700818 431164 cli_runner.go:180] docker network inspect old-k8s-version-20220325015306-262786 returned with exit code 1
I0325 01:56:33.700851 431164 network_create.go:257] error running [docker network inspect old-k8s-version-20220325015306-262786]: docker network inspect old-k8s-version-20220325015306-262786: exit status 1
stdout:
[]
stderr:
Error: No such network: old-k8s-version-20220325015306-262786
I0325 01:56:33.700871 431164 network_create.go:259] output of [docker network inspect old-k8s-version-20220325015306-262786]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: old-k8s-version-20220325015306-262786
** /stderr **
I0325 01:56:33.700917 431164 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:56:33.731365 431164 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-fcb21d43dbbf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:db:45:ae:c5}}
I0325 01:56:33.732243 431164 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-a040cc4bab62 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d0:f2:08:b6}}
I0325 01:56:33.733015 431164 network.go:240] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-12bda0d2312e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:32:64:a8}}
I0325 01:56:33.733812 431164 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc00013e8e8 192.168.76.0:0xc000702388] misses:0}
I0325 01:56:33.733853 431164 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0325 01:56:33.733877 431164 network_create.go:106] attempt to create docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0325 01:56:33.733929 431164 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220325015306-262786
I0325 01:56:33.801121 431164 network_create.go:90] docker network old-k8s-version-20220325015306-262786 192.168.76.0/24 created
I0325 01:56:33.801156 431164 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220325015306-262786" container
I0325 01:56:33.801207 431164 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0325 01:56:33.833969 431164 cli_runner.go:133] Run: docker volume create old-k8s-version-20220325015306-262786 --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true
I0325 01:56:33.863735 431164 oci.go:102] Successfully created a docker volume old-k8s-version-20220325015306-262786
I0325 01:56:33.863800 431164 cli_runner.go:133] Run: docker run --rm --name old-k8s-version-20220325015306-262786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --entrypoint /usr/bin/test -v old-k8s-version-20220325015306-262786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
I0325 01:56:34.361286 431164 oci.go:106] Successfully prepared a docker volume old-k8s-version-20220325015306-262786
I0325 01:56:34.361350 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:56:34.361371 431164 kic.go:179] Starting extracting preloaded images to volume ...
I0325 01:56:34.361435 431164 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
I0325 01:56:34.128040 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:36.627385 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:35.450252 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:37.949875 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:39.128737 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:41.626936 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:39.950734 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:42.451036 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:43.174328 431164 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220325015306-262786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (8.812845537s)
I0325 01:56:43.174371 431164 kic.go:188] duration metric: took 8.812995 seconds to extract preloaded images to volume
W0325 01:56:43.174413 431164 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0325 01:56:43.174420 431164 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0325 01:56:43.174472 431164 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0325 01:56:43.265519 431164 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220325015306-262786 --name old-k8s-version-20220325015306-262786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220325015306-262786 --network old-k8s-version-20220325015306-262786 --ip 192.168.76.2 --volume old-k8s-version-20220325015306-262786:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
I0325 01:56:43.664728 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Running}}
I0325 01:56:43.700561 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:43.732786 431164 cli_runner.go:133] Run: docker exec old-k8s-version-20220325015306-262786 stat /var/lib/dpkg/alternatives/iptables
I0325 01:56:43.800760 431164 oci.go:281] the created container "old-k8s-version-20220325015306-262786" has a running status.
I0325 01:56:43.800796 431164 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa...
I0325 01:56:43.897798 431164 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0325 01:56:44.005992 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:44.040565 431164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0325 01:56:44.040590 431164 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220325015306-262786 chown docker:docker /home/docker/.ssh/authorized_keys]
I0325 01:56:44.141276 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:56:44.181329 431164 machine.go:88] provisioning docker machine ...
I0325 01:56:44.181386 431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
I0325 01:56:44.181456 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.218999 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:44.219333 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:44.219364 431164 main.go:130] libmachine: About to run SSH command:
sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
I0325 01:56:44.346895 431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
I0325 01:56:44.347002 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.378982 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:44.379158 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:44.379177 431164 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts;
fi
fi
I0325 01:56:44.499114 431164 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0325 01:56:44.499153 431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0325 01:56:44.499174 431164 ubuntu.go:177] setting up certificates
I0325 01:56:44.499184 431164 provision.go:83] configureAuth start
I0325 01:56:44.499239 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:44.532553 431164 provision.go:138] copyHostCerts
I0325 01:56:44.532637 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0325 01:56:44.532651 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0325 01:56:44.532750 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0325 01:56:44.532836 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0325 01:56:44.532855 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0325 01:56:44.532882 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0325 01:56:44.532930 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0325 01:56:44.532938 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0325 01:56:44.532957 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0325 01:56:44.532998 431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
I0325 01:56:44.716034 431164 provision.go:172] copyRemoteCerts
I0325 01:56:44.716095 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0325 01:56:44.716131 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.750262 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:44.842652 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0325 01:56:44.860534 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0325 01:56:44.877456 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0325 01:56:44.894710 431164 provision.go:86] duration metric: configureAuth took 395.50834ms
I0325 01:56:44.894744 431164 ubuntu.go:193] setting minikube options for container-runtime
I0325 01:56:44.894925 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:56:44.894941 431164 machine.go:91] provisioned docker machine in 713.577559ms
I0325 01:56:44.894947 431164 client.go:171] LocalClient.Create took 11.257778857s
I0325 01:56:44.894990 431164 start.go:169] duration metric: libmachine.API.Create for "old-k8s-version-20220325015306-262786" took 11.257879213s
I0325 01:56:44.895011 431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:44.895022 431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0325 01:56:44.895080 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0325 01:56:44.895130 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:44.927429 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:45.014679 431164 ssh_runner.go:195] Run: cat /etc/os-release
I0325 01:56:45.017487 431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0325 01:56:45.017516 431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0325 01:56:45.017525 431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0325 01:56:45.017530 431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0325 01:56:45.017538 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0325 01:56:45.017604 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0325 01:56:45.017669 431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
I0325 01:56:45.017744 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0325 01:56:45.024070 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:45.041483 431164 start.go:305] post-start completed in 146.454729ms
I0325 01:56:45.041798 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:45.076182 431164 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/config.json ...
I0325 01:56:45.076420 431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:56:45.076458 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.108209 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:45.195204 431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:56:45.198866 431164 start.go:130] duration metric: createHost completed in 11.565060546s
I0325 01:56:45.198964 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
W0325 01:56:45.231974 431164 fix.go:134] unexpected machine state, will restart: <nil>
I0325 01:56:45.232009 431164 machine.go:88] provisioning docker machine ...
I0325 01:56:45.232033 431164 ubuntu.go:169] provisioning hostname "old-k8s-version-20220325015306-262786"
I0325 01:56:45.232086 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.262455 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:45.262621 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:45.262636 431164 main.go:130] libmachine: About to run SSH command:
sudo hostname old-k8s-version-20220325015306-262786 && echo "old-k8s-version-20220325015306-262786" | sudo tee /etc/hostname
I0325 01:56:45.386554 431164 main.go:130] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220325015306-262786
I0325 01:56:45.386637 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.419901 431164 main.go:130] libmachine: Using SSH client type: native
I0325 01:56:45.420066 431164 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7abfa0] 0x7af080 <nil> [] 0s} 127.0.0.1 49539 <nil> <nil>}
I0325 01:56:45.420098 431164 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-20220325015306-262786' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220325015306-262786/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-20220325015306-262786' | sudo tee -a /etc/hosts;
fi
fi
I0325 01:56:45.542421 431164 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0325 01:56:45.542450 431164 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
I0325 01:56:45.542464 431164 ubuntu.go:177] setting up certificates
I0325 01:56:45.542474 431164 provision.go:83] configureAuth start
I0325 01:56:45.542517 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:45.575074 431164 provision.go:138] copyHostCerts
I0325 01:56:45.575139 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
I0325 01:56:45.575151 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
I0325 01:56:45.575209 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
I0325 01:56:45.575301 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
I0325 01:56:45.575311 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
I0325 01:56:45.575333 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
I0325 01:56:45.575380 431164 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
I0325 01:56:45.575388 431164 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
I0325 01:56:45.575407 431164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
I0325 01:56:45.575453 431164 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220325015306-262786 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220325015306-262786]
I0325 01:56:45.699927 431164 provision.go:172] copyRemoteCerts
I0325 01:56:45.699978 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0325 01:56:45.700008 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:45.732608 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.059471 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0325 01:56:46.077602 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0325 01:56:46.094741 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0325 01:56:46.111752 431164 provision.go:86] duration metric: configureAuth took 569.266891ms
I0325 01:56:46.111780 431164 ubuntu.go:193] setting minikube options for container-runtime
I0325 01:56:46.111953 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:56:46.111967 431164 machine.go:91] provisioned docker machine in 879.950952ms
I0325 01:56:46.111977 431164 start.go:302] post-start starting for "old-k8s-version-20220325015306-262786" (driver="docker")
I0325 01:56:46.111985 431164 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0325 01:56:46.112037 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0325 01:56:46.112083 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.146009 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.238610 431164 ssh_runner.go:195] Run: cat /etc/os-release
I0325 01:56:46.241357 431164 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0325 01:56:46.241383 431164 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0325 01:56:46.241391 431164 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0325 01:56:46.241399 431164 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0325 01:56:46.241413 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
I0325 01:56:46.241465 431164 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
I0325 01:56:46.241560 431164 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem -> 2627862.pem in /etc/ssl/certs
I0325 01:56:46.241650 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0325 01:56:46.248459 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:46.265464 431164 start.go:305] post-start completed in 153.469791ms
I0325 01:56:46.265532 431164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0325 01:56:46.265573 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.297032 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.382984 431164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0325 01:56:46.387252 431164 fix.go:57] fixHost completed within 3m17.71088257s
I0325 01:56:46.387290 431164 start.go:81] releasing machines lock for "old-k8s-version-20220325015306-262786", held for 3m17.710952005s
I0325 01:56:46.387387 431164 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220325015306-262786
I0325 01:56:46.430623 431164 ssh_runner.go:195] Run: sudo service crio stop
I0325 01:56:46.430668 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.430668 431164 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0325 01:56:46.430720 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:56:46.467539 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:46.469867 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:56:43.627468 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:46.128274 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:44.950967 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:47.450049 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:46.901923 431164 openrc.go:165] stop output:
I0325 01:56:46.901990 431164 ssh_runner.go:195] Run: sudo service crio status
I0325 01:56:46.918929 431164 docker.go:183] disabling docker service ...
I0325 01:56:46.918994 431164 ssh_runner.go:195] Run: sudo service docker.socket stop
I0325 01:56:47.285757 431164 openrc.go:165] stop output:
** stderr **
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
** /stderr **
E0325 01:56:47.285792 431164 docker.go:186] "Failed to stop" err=<
sudo service docker.socket stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
> service="docker.socket"
I0325 01:56:47.285838 431164 ssh_runner.go:195] Run: sudo service docker.service stop
I0325 01:56:47.649755 431164 openrc.go:165] stop output:
** stderr **
Failed to stop docker.service.service: Unit docker.service.service not loaded.
** /stderr **
E0325 01:56:47.649784 431164 docker.go:189] "Failed to stop" err=<
sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
> service="docker.service"
W0325 01:56:47.649796 431164 cruntime.go:283] disable failed: sudo service docker.service stop: Process exited with status 5
stdout:
stderr:
Failed to stop docker.service.service: Unit docker.service.service not loaded.
I0325 01:56:47.649838 431164 ssh_runner.go:195] Run: sudo service docker status
W0325 01:56:47.664778 431164 containerd.go:244] disableOthers: Docker is still active
I0325 01:56:47.664901 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0325 01:56:47.676728 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwcm94eV9wbHVnaW5zXQojIGZ1c2Utb3ZlcmxheWZzIGlzIHVzZWQgZm9yIHJvb3RsZXNzCltwcm94eV9wbHVnaW5zLiJmdXNlLW92ZXJsYXlmcyJdCiAgdHlwZSA9ICJzbmFwc2hvdCIKICBhZGRyZXNzID0gIi9ydW4vY29udGFpbmVyZC1mdXNlLW92ZXJsYXlmcy5zb2NrIgoKW3BsdWdpbnNdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQubW9uaXRvci52MS5jZ3JvdXBzIl0KICAgIG5vX3Byb21ldGhldXMgPSBmYWxzZQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0KICAgIHN0cmVhbV9zZXJ2ZXJfYWRkcmVzcyA9ICIiCiAgICBzdHJlYW1fc2VydmVyX3BvcnQgPSAiMTAwMTAiCiAgICBlbmFibGVfc2VsaW51eCA9IGZhbHNlCiAgICBzYW5kYm94X2ltYWdlID0gIms4cy5nY3IuaW8vcGF1c2U6My4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKICAgIHJlc3RyaWN0X29vbV9zY29yZV9hZGogPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICBkaXNjYXJkX3VucGFja2VkX2xheWVycyA9IHRydWUKICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdCiAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQubWsiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
I0325 01:56:47.689398 431164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0325 01:56:47.695491 431164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0325 01:56:47.701670 431164 ssh_runner.go:195] Run: sudo service containerd restart
I0325 01:56:47.775876 431164 openrc.go:152] restart output:
I0325 01:56:47.775911 431164 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
I0325 01:56:47.775957 431164 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0325 01:56:47.780036 431164 start.go:462] Will wait 60s for crictl version
I0325 01:56:47.780095 431164 ssh_runner.go:195] Run: sudo crictl version
I0325 01:56:47.808499 431164 retry.go:31] will retry after 8.009118606s: Temporary Error: sudo crictl version: Process exited with status 1
stdout:
stderr:
time="2022-03-25T01:56:47Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
I0325 01:56:48.627787 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:51.128134 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:49.450281 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:51.950064 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:55.819167 431164 ssh_runner.go:195] Run: sudo crictl version
I0325 01:56:55.842809 431164 start.go:471] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.5.10
RuntimeApiVersion: v1alpha2
I0325 01:56:55.842867 431164 ssh_runner.go:195] Run: containerd --version
I0325 01:56:55.862493 431164 ssh_runner.go:195] Run: containerd --version
I0325 01:56:55.885291 431164 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
I0325 01:56:55.885389 431164 cli_runner.go:133] Run: docker network inspect old-k8s-version-20220325015306-262786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0325 01:56:55.918381 431164 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0325 01:56:55.921728 431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0325 01:56:55.933134 431164 out.go:176] - kubelet.cni-conf-dir=/etc/cni/net.mk
I0325 01:56:55.933231 431164 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
I0325 01:56:55.933303 431164 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:56:55.955768 431164 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:56:55.955788 431164 containerd.go:526] Images already preloaded, skipping extraction
I0325 01:56:55.955828 431164 ssh_runner.go:195] Run: sudo crictl images --output json
I0325 01:56:55.979329 431164 containerd.go:612] all images are preloaded for containerd runtime.
I0325 01:56:55.979348 431164 cache_images.go:84] Images are preloaded, skipping loading
I0325 01:56:55.979386 431164 ssh_runner.go:195] Run: sudo crictl info
I0325 01:56:56.002748 431164 cni.go:93] Creating CNI manager for ""
I0325 01:56:56.002768 431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0325 01:56:56.002779 431164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0325 01:56:56.002792 431164 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220325015306-262786 NodeName:old-k8s-version-20220325015306-262786 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0325 01:56:56.002974 431164 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-20220325015306-262786"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: old-k8s-version-20220325015306-262786
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
kubernetesVersion: v1.16.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0325 01:56:56.003083 431164 kubeadm.go:936] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220325015306-262786 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0325 01:56:56.003141 431164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
I0325 01:56:56.009691 431164 binaries.go:44] Found k8s binaries, skipping transfer
I0325 01:56:56.009827 431164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
I0325 01:56:56.016464 431164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (582 bytes)
I0325 01:56:56.028607 431164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0325 01:56:56.041034 431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
I0325 01:56:56.052949 431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
I0325 01:56:56.064655 431164 ssh_runner.go:362] scp memory --> /etc/init.d/kubelet (839 bytes)
I0325 01:56:56.077971 431164 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0325 01:56:56.080686 431164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0325 01:56:56.089291 431164 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786 for IP: 192.168.76.2
I0325 01:56:56.089415 431164 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
I0325 01:56:56.089479 431164 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
I0325 01:56:56.089550 431164 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key
I0325 01:56:56.089574 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt with IP's: []
I0325 01:56:56.173943 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt ...
I0325 01:56:56.173977 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.crt: {Name:mk49efef0712da8d212d4d9821e0f44d60c00474 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.174212 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key ...
I0325 01:56:56.174231 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/client.key: {Name:mk717fd0b3391f00b7d69817a759d1a2ba6569e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.174386 431164 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25
I0325 01:56:56.174407 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0325 01:56:56.553488 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 ...
I0325 01:56:56.553520 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25: {Name:mk0db50f453f850e6693f5f3251d591297fe24c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.553723 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 ...
I0325 01:56:56.553738 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25: {Name:mk44b3f12e50b4c043237e17ee319a130c7e6799 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.553849 431164 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt
I0325 01:56:56.553904 431164 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key
I0325 01:56:56.553946 431164 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key
I0325 01:56:56.553962 431164 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt with IP's: []
I0325 01:56:56.634118 431164 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt ...
I0325 01:56:56.634144 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt: {Name:mk41a988659c1306ddd1bb6feb42c4fcbdf737c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.634328 431164 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key ...
I0325 01:56:56.634387 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key: {Name:mk496346cb1866d19fd00f75f3dc225361dc4fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:56:56.634593 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem (1338 bytes)
W0325 01:56:56.634634 431164 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786_empty.pem, impossibly tiny 0 bytes
I0325 01:56:56.634643 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1675 bytes)
I0325 01:56:56.634663 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
I0325 01:56:56.634688 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
I0325 01:56:56.634714 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
I0325 01:56:56.634755 431164 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem (1708 bytes)
I0325 01:56:56.635301 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0325 01:56:56.653204 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0325 01:56:56.669615 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0325 01:56:56.686091 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220325015306-262786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0325 01:56:56.702278 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0325 01:56:56.718732 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0325 01:56:56.734704 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0325 01:56:56.751950 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0325 01:56:56.768370 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/2627862.pem --> /usr/share/ca-certificates/2627862.pem (1708 bytes)
I0325 01:56:56.785599 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0325 01:56:56.802704 431164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/262786.pem --> /usr/share/ca-certificates/262786.pem (1338 bytes)
I0325 01:56:56.818636 431164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0325 01:56:56.830434 431164 ssh_runner.go:195] Run: openssl version
I0325 01:56:56.834834 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0325 01:56:56.841688 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.844759 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 25 01:18 /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.844799 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0325 01:56:56.849420 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0325 01:56:56.856216 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/262786.pem && ln -fs /usr/share/ca-certificates/262786.pem /etc/ssl/certs/262786.pem"
I0325 01:56:56.863401 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/262786.pem
I0325 01:56:56.866302 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 25 01:23 /usr/share/ca-certificates/262786.pem
I0325 01:56:56.866341 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/262786.pem
I0325 01:56:56.871090 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/262786.pem /etc/ssl/certs/51391683.0"
I0325 01:56:56.878141 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2627862.pem && ln -fs /usr/share/ca-certificates/2627862.pem /etc/ssl/certs/2627862.pem"
I0325 01:56:56.885043 431164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.887974 431164 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 25 01:23 /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.888019 431164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2627862.pem
I0325 01:56:56.892629 431164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2627862.pem /etc/ssl/certs/3ec20f2e.0"
I0325 01:56:56.899573  431164 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220325015306-262786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220325015306-262786 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0325 01:56:56.899669 431164 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0325 01:56:56.899700 431164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0325 01:56:56.924510 431164 cri.go:87] found id: ""
I0325 01:56:56.924564 431164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0325 01:56:56.967274 431164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0325 01:56:56.974042 431164 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0325 01:56:56.974100 431164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0325 01:56:56.980509 431164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0325 01:56:56.980549 431164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0325 01:56:53.628216 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:56.127805 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:54.450144 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:56.450569 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:58.450825 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:56:57.342628 431164 out.go:203] - Generating certificates and keys ...
I0325 01:57:00.421358 431164 out.go:203] - Booting up control plane ...
I0325 01:56:58.128581 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:00.627978 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:00.950520 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:03.450640 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:03.128282 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:05.627062 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:05.450918 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:07.950107 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:10.462463 431164 out.go:203] - Configuring RBAC rules ...
I0325 01:57:10.884078 431164 cni.go:93] Creating CNI manager for ""
I0325 01:57:10.884101 431164 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
I0325 01:57:10.885886 431164 out.go:176] * Configuring CNI (Container Networking Interface) ...
I0325 01:57:10.885957 431164 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0325 01:57:10.889349 431164 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
I0325 01:57:10.889369 431164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0325 01:57:10.902215 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0325 01:57:11.219931 431164 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0325 01:57:11.220013 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:11.220072 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95 minikube.k8s.io/name=old-k8s-version-20220325015306-262786 minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:11.227208 431164 ops.go:34] apiserver oom_adj: -16
I0325 01:57:11.318580 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:07.627148 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:09.627800 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:11.628985 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:09.950750 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:11.951314 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:11.897565 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:12.397150 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:12.897044 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:13.397714 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:13.897135 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:14.396784 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:14.897509 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:15.397532 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:15.897241 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:16.397418 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:14.127368 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:16.128349 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:14.450849 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:16.451359 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:16.897298 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:17.397490 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:17.896851 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:18.396958 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:18.897528 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:19.397449 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:19.896818 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:20.396950 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:20.897730 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:21.397699 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:18.627452 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:21.127596 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:18.950702 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:20.950861 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:22.951062 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:21.897770 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:22.397129 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:22.897777 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:23.396809 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:23.897374 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:24.396808 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:24.897374 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:25.397510 431164 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0325 01:57:25.465074 431164 kubeadm.go:1020] duration metric: took 14.245126743s to wait for elevateKubeSystemPrivileges.
I0325 01:57:25.465105 431164 kubeadm.go:393] StartCluster complete in 28.565542464s
I0325 01:57:25.465127 431164 settings.go:142] acquiring lock: {Name:mkd9207a71140e597ee38b8fd6262dcfd9122927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:57:25.465222 431164 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
I0325 01:57:25.466826 431164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mka4757d6a6d95677654eb963585bc89154cfe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0325 01:57:25.982566 431164 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220325015306-262786" rescaled to 1
I0325 01:57:25.982642 431164 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0325 01:57:25.985735 431164 out.go:176] * Verifying Kubernetes components...
I0325 01:57:25.982729 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0325 01:57:25.985818 431164 ssh_runner.go:195] Run: sudo service kubelet status
I0325 01:57:25.982734 431164 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0325 01:57:25.982930 431164 config.go:176] Loaded profile config "old-k8s-version-20220325015306-262786": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
I0325 01:57:25.985917 431164 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220325015306-262786"
I0325 01:57:25.985938 431164 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220325015306-262786"
W0325 01:57:25.985944 431164 addons.go:165] addon storage-provisioner should already be in state true
I0325 01:57:25.985974 431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
I0325 01:57:25.987026 431164 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220325015306-262786"
I0325 01:57:25.987059 431164 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220325015306-262786"
I0325 01:57:25.987464 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:25.987734 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:26.043330 431164 out.go:176] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0325 01:57:26.041809 431164 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220325015306-262786"
W0325 01:57:26.043448 431164 addons.go:165] addon default-storageclass should already be in state true
I0325 01:57:26.043461 431164 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:57:26.043473 431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0325 01:57:26.043499 431164 host.go:66] Checking if "old-k8s-version-20220325015306-262786" exists ...
I0325 01:57:26.043528 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:57:26.043990 431164 cli_runner.go:133] Run: docker container inspect old-k8s-version-20220325015306-262786 --format={{.State.Status}}
I0325 01:57:26.079480 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:57:26.080003 431164 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0325 01:57:26.080025 431164 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0325 01:57:26.080072 431164 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220325015306-262786
I0325 01:57:26.123901 431164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49539 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-259449-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220325015306-262786/id_rsa Username:docker}
I0325 01:57:26.130675 431164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.76.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0325 01:57:26.132207 431164 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
I0325 01:57:26.203910 431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0325 01:57:26.305985 431164 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0325 01:57:26.701311 431164 start.go:777] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS
I0325 01:57:23.627677 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:25.629078 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:25.451476 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:27.950005 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:26.884863 431164 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
I0325 01:57:26.884915 431164 addons.go:417] enableAddons completed in 902.209882ms
I0325 01:57:28.137240 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:30.137382 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:28.127454 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:30.127857 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:30.450420 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:32.951294 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:32.137902 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:34.636994 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:36.637231 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:32.627061 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:34.627281 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:36.627716 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:35.450505 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:37.950444 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:38.637618 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:41.138151 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:38.628044 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:41.128288 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:40.450506 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:42.450985 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:43.637420 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:46.137000 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:43.627437 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:45.629027 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:44.949672 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:46.950297 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:48.137252 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:50.137524 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:48.127262 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:50.627821 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:49.450175 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:51.450356 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:52.638010 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:55.137979 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:52.628171 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:54.629330 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:57.127108 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:53.950613 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:56.449946 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:58.450110 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:57:57.637645 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:00.137151 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:57:59.127720 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:01.627485 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:00.451216 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:02.950770 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:02.137531 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:04.137755 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:06.637823 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:04.127661 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:06.127944 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:05.450556 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:07.451055 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:09.137247 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:11.137649 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:08.627986 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:11.127221 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:09.949891 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:11.950918 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:13.138175 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:15.637967 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:13.127386 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:15.628308 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:14.450791 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:16.949899 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:18.137346 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:20.137621 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:18.127661 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:20.627067 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:19.450727 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:21.950126 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:22.138039 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:24.637505 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:26.637944 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:22.627669 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:25.127702 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:24.450063 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:26.450260 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:28.450830 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:28.638663 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:31.137778 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:27.627188 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:29.627870 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:32.127319 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:30.950663 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:33.450641 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:33.137957 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:35.637360 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:34.127627 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:36.128027 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:35.950344 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:38.450663 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:37.637456 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:40.137522 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:38.128157 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:40.627116 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:40.949547 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:42.950881 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:42.637830 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:44.638149 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:42.627366 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:44.627901 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:47.127092 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:45.450029 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:47.951426 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:47.137013 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:49.137465 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:51.137831 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:49.127972 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:51.627645 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:50.450644 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:52.949935 440243 pod_ready.go:102] pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:53.138061 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:55.637301 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:54.128948 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:56.627956 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:54.453764 440243 pod_ready.go:81] duration metric: took 4m0.014071871s waiting for pod "calico-kube-controllers-8594699699-b8cwf" in "kube-system" namespace to be "Ready" ...
E0325 01:58:54.453795 440243 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0325 01:58:54.453817 440243 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-srh8z" in "kube-system" namespace to be "Ready" ...
I0325 01:58:56.465394 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:58.466164 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:58:57.637937 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:00.137993 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:58:59.131509 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:01.626847 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:00.466246 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:02.466356 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:02.138041 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:04.138262 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:06.637907 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:03.627991 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:06.128421 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:04.466551 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:06.965390 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:09.139879 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:11.637442 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:08.627165 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:10.628040 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:08.965530 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:10.966031 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:12.966329 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:13.637538 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:15.639122 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:13.127579 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:15.127811 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:15.466507 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:17.966052 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:18.137261 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:20.137829 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:17.628001 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:20.127516 442784 pod_ready.go:102] pod "coredns-64897985d-qsk2c" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:20.631571 442784 pod_ready.go:81] duration metric: took 4m0.015353412s waiting for pod "coredns-64897985d-qsk2c" in "kube-system" namespace to be "Ready" ...
E0325 01:59:20.631596 442784 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
I0325 01:59:20.631606 442784 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.633133 442784 pod_ready.go:97] error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
I0325 01:59:20.633152 442784 pod_ready.go:81] duration metric: took 1.540051ms waiting for pod "coredns-64897985d-x2v4t" in "kube-system" namespace to be "Ready" ...
E0325 01:59:20.633160 442784 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-x2v4t" in "kube-system" namespace (skipping!): pods "coredns-64897985d-x2v4t" not found
I0325 01:59:20.633166 442784 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.637747 442784 pod_ready.go:92] pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:59:20.637768 442784 pod_ready.go:81] duration metric: took 4.596316ms waiting for pod "etcd-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.637780 442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.642175 442784 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:59:20.642191 442784 pod_ready.go:81] duration metric: took 4.404746ms waiting for pod "kube-apiserver-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.642200 442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.825032 442784 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:59:20.825054 442784 pod_ready.go:81] duration metric: took 182.848289ms waiting for pod "kube-controller-manager-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:20.825064 442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
I0325 01:59:21.225297 442784 pod_ready.go:92] pod "kube-proxy-zv4v5" in "kube-system" namespace has status "Ready":"True"
I0325 01:59:21.225318 442784 pod_ready.go:81] duration metric: took 400.248182ms waiting for pod "kube-proxy-zv4v5" in "kube-system" namespace to be "Ready" ...
I0325 01:59:21.225330 442784 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:21.625682 442784 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace has status "Ready":"True"
I0325 01:59:21.625709 442784 pod_ready.go:81] duration metric: took 400.371185ms waiting for pod "kube-scheduler-custom-weave-20220325014921-262786" in "kube-system" namespace to be "Ready" ...
I0325 01:59:21.625721 442784 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-fm6bn" in "kube-system" namespace to be "Ready" ...
I0325 01:59:19.966529 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:22.465926 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:22.637466 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:24.637948 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:24.032172 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:26.531791 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:24.466219 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:26.466280 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:27.137486 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:29.137528 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:31.137566 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:29.031481 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:31.531040 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:28.965577 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:31.465336 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:33.138065 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:35.637535 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:33.532410 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:36.031682 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:33.965866 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:36.465716 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:38.465892 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:37.637991 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:39.638114 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:38.530826 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:40.531743 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:40.966108 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:43.465526 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:42.137688 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:44.637241 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:46.637686 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:43.031806 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:45.531386 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:45.465729 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:47.967747 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:49.137625 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:51.638236 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:47.531588 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:49.531656 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:52.031683 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:50.466203 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:52.966050 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:54.137670 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:56.138392 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:54.032187 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:56.531093 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:55.466240 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:57.466490 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:58.637751 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:00.638089 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 01:59:58.531417 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:00.531966 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 01:59:59.966109 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:02.465830 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:03.137541 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:05.637552 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:03.031649 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:05.531282 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:04.965956 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:06.968356 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:08.137145 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:10.137534 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:07.531455 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:09.531699 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:12.032938 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:09.466106 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:11.965385 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:12.637732 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:15.138150 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:14.531694 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:16.531949 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:13.966907 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:16.466246 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:18.466374 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:17.637995 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:20.137994 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:19.031660 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:21.531699 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:20.966050 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:23.466019 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:22.637195 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:24.638276 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:24.032516 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:26.531380 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:25.466373 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:27.966783 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:27.137477 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:29.138059 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:31.138114 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:29.030968 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:31.031214 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:30.466003 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:32.966003 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:33.637955 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:35.638305 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:33.531559 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:36.031302 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:35.466050 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:37.966004 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:38.137342 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:40.138018 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:38.031823 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:40.531380 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:40.465841 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:42.966235 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:42.638060 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:45.137181 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:43.031453 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:45.031822 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:45.465711 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:47.965776 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:47.137290 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:49.137908 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:51.638340 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:47.531831 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:50.032302 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:49.966476 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:52.466039 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:54.137713 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:56.637016 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:52.531720 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:55.031662 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:54.966085 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:56.966359 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:58.637267 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:00.637464 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:00:57.531581 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:00.030994 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:02.031443 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:00:58.967286 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:01.466350 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:03.466445 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:02.638041 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:05.137294 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:04.031865 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:06.031960 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:05.966165 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:07.966194 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:07.137350 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:09.137969 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:11.638005 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:08.032116 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:10.532240 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:10.466003 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:12.466535 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:14.137955 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:16.637434 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:13.031864 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:15.531085 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:14.966152 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:17.466878 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:18.637978 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:21.137203 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:17.532033 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:20.031731 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:22.034273 442784 pod_ready.go:102] pod "weave-net-fm6bn" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:19.966129 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:21.966568 440243 pod_ready.go:102] pod "calico-node-srh8z" in "kube-system" namespace has status "Ready":"False"
I0325 02:01:23.137475 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:25.137628 431164 node_ready.go:58] node "old-k8s-version-20220325015306-262786" has status "Ready":"False"
I0325 02:01:26.139331 431164 node_ready.go:38] duration metric: took 4m0.007092133s waiting for node "old-k8s-version-20220325015306-262786" to be "Ready" ...
I0325 02:01:26.141382 431164 out.go:176]
W0325 02:01:26.141510 431164 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
W0325 02:01:26.141527 431164 out.go:241] *
W0325 02:01:26.142250 431164 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
079cd3357f1fd 6de166512aa22 About a minute ago Running kindnet-cni 1 0b7c839dde6fb
8e7808702d5d6 6de166512aa22 4 minutes ago Exited kindnet-cni 0 0b7c839dde6fb
f84fedf62f62a c21b0c7400f98 4 minutes ago Running kube-proxy 0 8329903e5a1d1
2a8a16a4c5ab0 b305571ca60a5 4 minutes ago Running kube-apiserver 0 6257dca791a92
0dcaa5ddf16d7 06a629a7e51cd 4 minutes ago Running kube-controller-manager 0 4f6ca772f8d74
0f2defa775551 301ddc62b80b1 4 minutes ago Running kube-scheduler 0 64b5b98ae89a8
1366a173f44ad b2756210eeabf 4 minutes ago Running etcd 0 f07b14711b6c4
*
* ==> containerd <==
* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:01:27 UTC. --
Mar 25 01:57:01 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:01.788397018Z" level=info msg="StartContainer for \"0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95\" returns successfully"
Mar 25 01:57:01 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:01.788552920Z" level=info msg="StartContainer for \"2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0\" returns successfully"
Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.717807531Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.957585408Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-wxllf,Uid:8df13659-eaff-4414-b783-5e971e2dae50,Namespace:kube-system,Attempt:0,}"
Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.957585630Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-rx7hj,Uid:bf35a126-09fa-4db9-9aa4-2cb811bf4595,Namespace:kube-system,Attempt:0,}"
Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.982307374Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0 pid=2399
Mar 25 01:57:25 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:25.985207180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb pid=2414
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.097668598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wxllf,Uid:8df13659-eaff-4414-b783-5e971e2dae50,Namespace:kube-system,Attempt:0,} returns sandbox id \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.101054577Z" level=info msg="CreateContainer within sandbox \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.194056671Z" level=info msg="CreateContainer within sandbox \"8329903e5a1d1c800be8e2125d67bf84ec79a4aa9d91a6c8ba109f8ad1951fe0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.194625839Z" level=info msg="StartContainer for \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.207575966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-rx7hj,Uid:bf35a126-09fa-4db9-9aa4-2cb811bf4595,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.210667921Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.305123248Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.306084994Z" level=info msg="StartContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\""
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.489647000Z" level=info msg="StartContainer for \"f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e\" returns successfully"
Mar 25 01:57:26 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T01:57:26.690812432Z" level=info msg="StartContainer for \"8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7\" returns successfully"
Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930815730Z" level=info msg="shim disconnected" id=8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7
Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930882983Z" level=warning msg="cleaning up after shim disconnected" id=8e7808702d5d6d554f961b1120eef82835bf0c35a13a50bc3c3deae13e17b0b7 namespace=k8s.io
Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.930895328Z" level=info msg="cleaning up dead shim"
Mar 25 02:00:06 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:06.940936267Z" level=warning msg="cleanup warnings time=\"2022-03-25T02:00:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3317\n"
Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.016635529Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.031284315Z" level=info msg="CreateContainer within sandbox \"0b7c839dde6fbbb78af061c24b63839c063e1b68d58c057dd9b9aad8baabf2fb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.031721233Z" level=info msg="StartContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\""
Mar 25 02:00:07 old-k8s-version-20220325015306-262786 containerd[516]: time="2022-03-25T02:00:07.104353921Z" level=info msg="StartContainer for \"079cd3357f1fdb712691e0e2faf42ffa65a9f250899b730661a824d22e9c22e3\" returns successfully"
*
* ==> describe nodes <==
* Name: old-k8s-version-20220325015306-262786
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=old-k8s-version-20220325015306-262786
kubernetes.io/os=linux
minikube.k8s.io/commit=e9bcad7e6ac6773a18692e93ac9e0eca8ee7cb95
minikube.k8s.io/name=old-k8s-version-20220325015306-262786
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_03_25T01_57_11_0700
minikube.k8s.io/version=v1.25.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 25 Mar 2022 01:57:05 +0000
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 25 Mar 2022 02:01:06 +0000 Fri, 25 Mar 2022 01:57:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 25 Mar 2022 02:01:06 +0000 Fri, 25 Mar 2022 01:57:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 25 Mar 2022 02:01:06 +0000 Fri, 25 Mar 2022 01:57:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 25 Mar 2022 02:01:06 +0000 Fri, 25 Mar 2022 01:57:02 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-20220325015306-262786
Capacity:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873824Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304695084Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32873824Ki
pods: 110
System Info:
Machine ID: 140a143b31184b58be947b52a01fff83
System UUID: 586019ba-8c2c-445d-9550-f545f1f4ef4d
Boot ID: 63fce5d9-a30b-498a-bfed-7dd46d23a363
Kernel Version: 5.13.0-1021-gcp
OS Image: Ubuntu 20.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.5.10
Kubelet Version: v1.16.0
Kube-Proxy Version: v1.16.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-old-k8s-version-20220325015306-262786 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
kube-system kindnet-rx7hj 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 4m2s
kube-system kube-apiserver-old-k8s-version-20220325015306-262786 250m (3%) 0 (0%) 0 (0%) 0 (0%) 2m57s
kube-system kube-controller-manager-old-k8s-version-20220325015306-262786 200m (2%) 0 (0%) 0 (0%) 0 (0%) 3m16s
kube-system kube-proxy-wxllf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m2s
kube-system kube-scheduler-old-k8s-version-20220325015306-262786 100m (1%) 0 (0%) 0 (0%) 0 (0%) 3m12s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (8%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeAllocatableEnforced 4m27s kubelet, old-k8s-version-20220325015306-262786 Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m26s (x8 over 4m27s) kubelet, old-k8s-version-20220325015306-262786 Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m26s (x8 over 4m27s) kubelet, old-k8s-version-20220325015306-262786 Node old-k8s-version-20220325015306-262786 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m26s (x7 over 4m27s) kubelet, old-k8s-version-20220325015306-262786 Node old-k8s-version-20220325015306-262786 status is now: NodeHasSufficientPID
Normal Starting 4m1s kube-proxy, old-k8s-version-20220325015306-262786 Starting kube-proxy.
*
* ==> dmesg <==
* [ +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
[ +7.006669] IPv4: martian source 10.85.0.21 from 10.85.0.21, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 e5 8c f4 3b 15 08 06
[Mar25 02:00] IPv4: martian source 10.85.0.22 from 10.85.0.22, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 3a e8 93 0f f0 08 06
[ +11.785527] IPv4: martian source 10.85.0.23 from 10.85.0.23, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 46 52 04 02 2c 26 08 06
[ +8.370268] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
[ +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
[ +4.995582] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
[ +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
[ +1.989183] IPv4: martian source 10.85.0.24 from 10.85.0.24, on dev eth0
[ +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 76 31 e2 ca 4f 08 06
[ +3.010521] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
[ +0.000006] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
[ +12.328647] IPv4: martian source 10.85.0.25 from 10.85.0.25, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a ae 44 d1 e4 c8 08 06
[Mar25 02:01] IPv4: martian source 10.85.0.26 from 10.85.0.26, on dev eth0
[ +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a f5 b0 71 56 83 08 06
[ +12.211857] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
[ +0.000007] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
[ +4.294695] IPv4: martian source 10.85.0.27 from 10.85.0.27, on dev eth0
[ +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 99 aa 90 20 5f 08 06
[ +0.701780] IPv4: martian source 10.244.0.232 from 10.244.0.3, on dev br-a040cc4bab62
[ +0.000005] ll header: 00000000: 02 42 d0 f2 08 b6 02 42 c0 a8 3a 02 08 00
*
* ==> etcd [1366a173f44ada0abf1e4f2c5003b1d9df1c0ee0a950928cdf3a5f3f7048faaa] <==
* 2022-03-25 01:57:01.789418 I | etcdserver: initial cluster = old-k8s-version-20220325015306-262786=https://192.168.76.2:2380
2022-03-25 01:57:01.795636 I | etcdserver: starting member ea7e25599daad906 in cluster 6f20f2c4b2fb5f8a
2022-03-25 01:57:01.795668 I | raft: ea7e25599daad906 became follower at term 0
2022-03-25 01:57:01.795679 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2022-03-25 01:57:01.795684 I | raft: ea7e25599daad906 became follower at term 1
2022-03-25 01:57:01.803372 W | auth: simple token is not cryptographically signed
2022-03-25 01:57:01.806268 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
2022-03-25 01:57:01.807413 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
2022-03-25 01:57:01.807883 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
2022-03-25 01:57:01.808954 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2022-03-25 01:57:01.809140 I | embed: listening for metrics on http://127.0.0.1:2381
2022-03-25 01:57:01.809206 I | embed: listening for metrics on http://192.168.76.2:2381
2022-03-25 01:57:02.596023 I | raft: ea7e25599daad906 is starting a new election at term 1
2022-03-25 01:57:02.596060 I | raft: ea7e25599daad906 became candidate at term 2
2022-03-25 01:57:02.596077 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
2022-03-25 01:57:02.596090 I | raft: ea7e25599daad906 became leader at term 2
2022-03-25 01:57:02.596097 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2022-03-25 01:57:02.596295 I | etcdserver: setting up the initial cluster version to 3.3
2022-03-25 01:57:02.597359 N | etcdserver/membership: set the initial cluster version to 3.3
2022-03-25 01:57:02.597406 I | etcdserver/api: enabled capabilities for version 3.3
2022-03-25 01:57:02.597440 I | etcdserver: published {Name:old-k8s-version-20220325015306-262786 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2022-03-25 01:57:02.597617 I | embed: ready to serve client requests
2022-03-25 01:57:02.597747 I | embed: ready to serve client requests
2022-03-25 01:57:02.600650 I | embed: serving client requests on 192.168.76.2:2379
2022-03-25 01:57:02.601990 I | embed: serving client requests on 127.0.0.1:2379
*
* ==> kernel <==
* 02:01:27 up 4:39, 0 users, load average: 1.41, 1.82, 1.95
Linux old-k8s-version-20220325015306-262786 5.13.0-1021-gcp #25~20.04.1-Ubuntu SMP Thu Mar 17 04:09:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.4 LTS"
*
* ==> kube-apiserver [2a8a16a4c5ab06cec61505599bfcd94a42a8de336bbe343006809032ae98bee0] <==
* I0325 01:57:05.741087 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0325 01:57:05.742225 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.76.2, ResourceVersion: 0, AdditionalErrorMsg:
I0325 01:57:05.747229 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0325 01:57:05.747261 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0325 01:57:05.883908 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0325 01:57:05.883932 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0325 01:57:05.884126 1 cache.go:39] Caches are synced for autoregister controller
I0325 01:57:05.884201 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0325 01:57:06.739679 1 controller.go:107] OpenAPI AggregationController: Processing item
I0325 01:57:06.739704 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0325 01:57:06.739717 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0325 01:57:06.743177 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0325 01:57:06.747597 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0325 01:57:06.747620 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0325 01:57:07.493498 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0325 01:57:08.520754 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0325 01:57:08.800880 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0325 01:57:09.114170 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0325 01:57:09.114813 1 controller.go:606] quota admission added evaluator for: endpoints
I0325 01:57:09.966541 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0325 01:57:10.500104 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0325 01:57:10.871924 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0325 01:57:25.143684 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0325 01:57:25.153906 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
I0325 01:57:25.619240 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
*
* ==> kube-controller-manager [0dcaa5ddf16d74bb0f7b672cf9c1f93a9049cfc9e9fa01287dfc31c913129a95] <==
* I0325 01:57:25.519444 1 shared_informer.go:204] Caches are synced for disruption
I0325 01:57:25.519471 1 disruption.go:341] Sending events to api server.
I0325 01:57:25.564979 1 shared_informer.go:204] Caches are synced for persistent volume
I0325 01:57:25.567532 1 shared_informer.go:204] Caches are synced for node
I0325 01:57:25.567556 1 range_allocator.go:172] Starting range CIDR allocator
I0325 01:57:25.567570 1 shared_informer.go:197] Waiting for caches to sync for cidrallocator
I0325 01:57:25.569098 1 shared_informer.go:204] Caches are synced for HPA
I0325 01:57:25.569516 1 shared_informer.go:204] Caches are synced for TTL
I0325 01:57:25.615069 1 shared_informer.go:204] Caches are synced for daemon sets
I0325 01:57:25.619293 1 shared_informer.go:204] Caches are synced for taint
I0325 01:57:25.619399 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
W0325 01:57:25.619533 1 node_lifecycle_controller.go:903] Missing timestamp for Node old-k8s-version-20220325015306-262786. Assuming now as a timestamp.
I0325 01:57:25.619601 1 node_lifecycle_controller.go:1058] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0325 01:57:25.619813 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0325 01:57:25.619960 1 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"old-k8s-version-20220325015306-262786", UID:"f6951a5c-6edc-46f8-beec-3c90a8b9581c", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node old-k8s-version-20220325015306-262786 event: Registered Node old-k8s-version-20220325015306-262786 in Controller
I0325 01:57:25.627002 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"9ebcce20-95c8-46a7-994a-18f1bc7bd92e", APIVersion:"apps/v1", ResourceVersion:"232", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-rx7hj
I0325 01:57:25.629138 1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-wxllf
E0325 01:57:25.636892 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6d02422f-16d5-4e4d-a5bf-93392a263b1e", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63783770230, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b60f60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001c3a040), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001b60fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001b60fe0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001685180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016811f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001643860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0002ceee8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001681238)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0325 01:57:25.667564 1 shared_informer.go:204] Caches are synced for resource quota
I0325 01:57:25.667705 1 shared_informer.go:204] Caches are synced for cidrallocator
I0325 01:57:25.669937 1 shared_informer.go:204] Caches are synced for resource quota
I0325 01:57:25.670463 1 range_allocator.go:359] Set node old-k8s-version-20220325015306-262786 PodCIDR to [10.244.0.0/24]
I0325 01:57:25.679094 1 shared_informer.go:204] Caches are synced for garbage collector
I0325 01:57:25.722642 1 shared_informer.go:204] Caches are synced for garbage collector
I0325 01:57:25.722667 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [f84fedf62f62a8e554f8fb0e89611f54b0df5ed4a16b1110ac42099248a8a41e] <==
* W0325 01:57:26.609517 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
I0325 01:57:26.688448 1 node.go:135] Successfully retrieved node IP: 192.168.76.2
I0325 01:57:26.688492 1 server_others.go:149] Using iptables Proxier.
I0325 01:57:26.688881 1 server.go:529] Version: v1.16.0
I0325 01:57:26.690169 1 config.go:131] Starting endpoints config controller
I0325 01:57:26.690202 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0325 01:57:26.690377 1 config.go:313] Starting service config controller
I0325 01:57:26.690393 1 shared_informer.go:197] Waiting for caches to sync for service config
I0325 01:57:26.790460 1 shared_informer.go:204] Caches are synced for endpoints config
I0325 01:57:26.790538 1 shared_informer.go:204] Caches are synced for service config
*
* ==> kube-scheduler [0f2defa775551729a53f4b102a79f5f1c8e3853bbb12ba362f6555860b09d99a] <==
* I0325 01:57:05.800810 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
E0325 01:57:05.892456 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0325 01:57:05.892758 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0325 01:57:05.892875 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0325 01:57:05.892975 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0325 01:57:05.893150 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0325 01:57:05.893319 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0325 01:57:05.893573 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0325 01:57:05.894058 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0325 01:57:05.894470 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0325 01:57:05.894601 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0325 01:57:05.894681 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0325 01:57:06.894818 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0325 01:57:06.895872 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0325 01:57:06.897095 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0325 01:57:06.898221 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0325 01:57:06.899310 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0325 01:57:06.900400 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0325 01:57:06.901503 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0325 01:57:06.902607 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0325 01:57:06.903724 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0325 01:57:06.904742 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0325 01:57:06.905998 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0325 01:57:25.156410 1 factory.go:585] pod is already present in the activeQ
E0325 01:57:25.162943 1 factory.go:585] pod is already present in the activeQ
*
* ==> kubelet <==
* -- Logs begin at Fri 2022-03-25 01:56:43 UTC, end at Fri 2022-03-25 02:01:27 UTC. --
Mar 25 01:59:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:25.901732 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:30.902249 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:35.902903 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:40.903408 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:45.904051 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:50.904601 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 01:59:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 01:59:55.905273 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:00.905816 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:05.906484 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:10.906998 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:15.907739 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:20.908340 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:25.909014 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:30 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:30.909847 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:35 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:35.910486 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:40 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:40.911183 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:45 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:45.911882 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:50 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:50.912489 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:00:55 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:00:55.913113 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:00 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:00.913828 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:05 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:05.914441 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:10 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:10.915161 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:15 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:15.915890 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:20.916693 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:25.917325 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
-- /stdout --
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220325015306-262786 -n old-k8s-version-20220325015306-262786
helpers_test.go:262: (dbg) Run: kubectl --context old-k8s-version-20220325015306-262786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:276: (dbg) Run: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1 (49.806283ms)
** stderr **
Error from server (NotFound): pods "coredns-5644d7b6d9-trm4j" not found
Error from server (NotFound): pods "storage-provisioner" not found
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20220325015306-262786 describe pod coredns-5644d7b6d9-trm4j storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (501.61s)
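Reading this post-mortem, etcd, the apiserver, the controller manager, and kube-proxy all look healthy; the only error that persists to the end is the kubelet's "cni plugin not initialized", repeated every 5 seconds until the 8m19s timeout, which is why coredns and storage-provisioner never left Pending. When sifting dumps like this one, a throwaway helper that tallies repeated kubelet errors makes that pattern obvious. This is an illustrative sketch, not part of the test suite: `tally_errors`, `LINE_RE`, and `SAMPLE` are made-up names, and `SAMPLE` holds two kubelet journal lines copied from the log above.

```python
import re
from collections import Counter

# Two kubelet journal lines copied verbatim from the post-mortem above.
SAMPLE = """\
Mar 25 02:01:20 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:20.916693 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Mar 25 02:01:25 old-k8s-version-20220325015306-262786 kubelet[1069]: E0325 02:01:25.917325 1069 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
"""

# klog entries look like "E0325 02:01:20.916693 <pid> <file:line>] <message>";
# capture the severity letter, the source location, and the message.
LINE_RE = re.compile(r"kubelet\[\d+\]: ([IWE])\d{4} [\d:.]+\s+\d+ (\S+)\] (.*)")

def tally_errors(text: str) -> Counter:
    """Count kubelet error-severity messages, keyed by (source location, message)."""
    counts: Counter = Counter()
    for line in text.splitlines():
        m = LINE_RE.search(line)
        if m and m.group(1) == "E":  # keep only E (error) lines
            counts[(m.group(2), m.group(3))] += 1
    return counts

for (loc, msg), n in tally_errors(SAMPLE).most_common():
    print(f"{n:4d}  {loc}  {msg[:60]}")
```

Run against the full kubelet section above, the top entry would be `kubelet.go:2187` with 25 hits, pointing straight at the uninitialized CNI plugin rather than at the control plane.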